Keras: Loading saved model fails with ValueError You are trying to load a weight file containing 1 layers into a model with 0 layers

Created on 13 Jun 2018 · 59 Comments · Source: keras-team/keras

This toy example

import sys
import keras
from keras import Sequential
from keras.activations import linear
from keras.engine import InputLayer
from keras.layers import Dense
from keras.losses import mean_squared_error
from keras.metrics import mean_absolute_error
from keras.models import load_model
from keras.optimizers import sgd

print("Python version: " + sys.version)
print("Keras version: " + keras.__version__)

model = Sequential()
model.add(InputLayer(batch_input_shape=(1, 5)))
model.add(Dense(10, activation=linear))
model.compile(loss=mean_squared_error, optimizer=sgd(), metrics=[mean_absolute_error])

model.save('test.h5')
del model
load_model('test.h5')

gives the following output/error

Using TensorFlow backend.
Python version: 3.6.5 (default, Apr 25 2018, 14:23:58) 
[GCC 4.2.1 Compatible Apple LLVM 9.1.0 (clang-902.0.39.1)]
Keras version: 2.2.0
2018-06-13 12:02:50.570395: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
  File "/Users/samb/IdeaProjects/connect-four-challenge-client-python3/test.py", line 22, in <module>
    load_model('test.h5')
  File "/Users/samb/IdeaProjects/connect-four-challenge-client-python3/venv/lib/python3.6/site-packages/keras/engine/saving.py", line 264, in load_model
    load_weights_from_hdf5_group(f['model_weights'], model.layers)
  File "/Users/samb/IdeaProjects/connect-four-challenge-client-python3/venv/lib/python3.6/site-packages/keras/engine/saving.py", line 901, in load_weights_from_hdf5_group
    str(len(filtered_layers)) + ' layers.')
ValueError: You are trying to load a weight file containing 1 layers into a model with 0 layers.

Looking at https://github.com/keras-team/keras/blob/2.2.0/keras/engine/saving.py#L883 when debugging, I see that in

    filtered_layers = []
    for layer in layers:
        weights = layer.weights
        if weights:
            filtered_layers.append(layer)

the value of weights is always the empty list [] whereas in the subsequent block

    layer_names = filtered_layer_names
    if len(layer_names) != len(filtered_layers):
        raise ValueError('You are trying to load a weight file '
                         'containing ' + str(len(layer_names)) +
                         ' layers into a model with ' +
                         str(len(filtered_layers)) + ' layers.')

the value of layer_names (respectively, filtered_layer_names), is the singleton list ['dense_1'] leading to the error message shown above.

I'm not quite certain what the cause of the problem is. Is something going wrong in saving the model? Or is something wrong when loading the model (before loading the weights)? Or is something wrong in this logic for loading the weights?
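One way to see which side is at fault is to inspect the model_config attribute that model.save() writes into the HDF5 file (a minimal sketch using h5py; the attribute name also appears further down in this thread):

import json
import h5py

# Dump the architecture JSON stored by model.save(); if 'batch_input_shape'
# is missing from the first layer here, the problem is on the saving side.
with h5py.File('test.h5', 'r') as f:
    raw = f.attrs['model_config']
config = json.loads(raw if isinstance(raw, str) else raw.decode('utf-8'))
print(json.dumps(config, indent=2))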

Most helpful comment

I see a workaround, but not a fix.
Would be better to re-open the issue.

All 59 comments

I've got the same problem after I updated keras from 2.0.8 to 2.2.
I can still load my old models, but not the newly created ones.
I can also reproduce this error using your example code after changing line 22 to
keras.models.load_model('test.h5')

I hope someone can help.

edit:
Saving the model as two separate files (JSON architecture and weights) didn't help either.
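For reference, this is the kind of "JSON + weights" split the line above refers to (a minimal sketch using the standard Keras 2.x API and the toy model from the original report; the commenter reports it hits the same problem):

from keras.models import Sequential, model_from_json
from keras.layers import Dense, InputLayer

model = Sequential()
model.add(InputLayer(batch_input_shape=(1, 5)))
model.add(Dense(10))

# Save architecture and weights separately
with open('test.json', 'w') as f:
    f.write(model.to_json())          # architecture only (uses get_config())
model.save_weights('test_weights.h5')  # weights only

# Reload: reportedly fails the same way, since to_json() also relies on get_config()
with open('test.json') as f:
    restored = model_from_json(f.read())
restored.load_weights('test_weights.h5')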

I managed to narrow down the problem; it seems to boil down to whether or not the 'input_shape' parameter is used. The following code does not give the ValueError:

import sys

import keras
from keras import Sequential
from keras import losses
from keras import metrics
from keras import optimizers
from keras.activations import linear
from keras.engine import InputLayer
from keras.layers import Dense
from keras.models import load_model

print("Python version: " + sys.version)
print("Keras version: " + keras.__version__)

model = Sequential()
model.add(InputLayer(batch_input_shape=(1, 5)))
model.add(Dense(10, input_shape=(5,), activation=linear))
model.compile(loss=losses.mean_squared_error,
              optimizer=optimizers.sgd(),
              metrics=[metrics.mean_absolute_error])

model.save('test.h5')
del model
load_model('test.h5')

The only difference to the code above is that now the dense layer has the additional parameter input_shape set.

Thanks!
Looks like I put a bracket in the wrong place.

This works:
model.add(Bidirectional(LSTM(hidden_List[0], return_sequences=True, activation="tanh"), input_shape=shape))

This doesn't:
model.add(Bidirectional(LSTM(hidden_List[0], return_sequences=True, input_shape=shape, activation="tanh")))

It even throws a NotImplementedError.
I don't know why this error wasn't thrown on my larger project, but at least it works now.

I had the same problem when I tried to fine-tune a vgg16 model. It happened when I upgraded from keras 2.1.6 to 2.2.0. The solution proposed above didn't work for me :( and the only way I found was to downgrade keras to the previous version (2.1.6).

I had the same problem: I could not load an HDF5 model saved earlier, failing with "ValueError: You are trying to load a weight file containing X layers into a model with 0 layers".

Downgrading to 2.1.0 solved my problem.

Yep, downgrading to Keras 2.1.x also solved the problem for me, too, just as reported by @jeffreynghm and @juliojj.

I have the same problem... Is it an error in Keras or in our code? Do you think it will be fixed in the next update?

The following code snippet isolates the error. It seems the problem happens when InputLayer is used. _model1_ saves and loads fine but _model2_ (the same single-layer model) fails. The only difference: _model2_ uses InputLayer.

from keras.models import Sequential, load_model
from keras.layers import Conv2D, Input, InputLayer

# without InputLayer
model1 = Sequential()
model1.add(Conv2D(32, (5, 5), input_shape=(64,64,3)))

model1.save('test.hdf5')
model_from_file = load_model('test.hdf5')
print("Loaded model 1 from file successful")

# with InputLayer --> causes error when loading saved model
model2 = Sequential()
model2.add(InputLayer(input_shape=(64,64,3)))
model2.add(Conv2D(32, (5, 5)))

model2.save('test2.hdf5')
model_from_file = load_model('test2.hdf5')
print("Loaded model 2 from file successful")

... continuing from above, I probed further and noticed that the saved model_config differs between the two cases, which is why loading the weights fails.

From visual inspection, the main difference is that the model_config for model2 (which uses InputLayer) does not have the _batch_input_shape_ element for the conv layer.

I'm not exactly sure how to fix it. Just leaving bread crumbs for someone who is more familiar with the Keras codebase.

I had this problem for a model with dropout in the input layer. It looked as follows:

    model = Sequential()
    model.add(Dropout(0.5))
    model.add(Dense(500, activation='relu', input_dim=inputDimension))
    ....

Commenting out the line "model.add(Dropout(0.5))" fixed the weight-loading issue for me.

As a follow-up to @SamuelBucheliZ's comment on the 14th:
The presence of InputLayer definitely triggers the issue.
In my case, repeating the input_shape in both the InputLayer and Dense constructors does the trick (no need to use batch_input_shape).

Something like:

model = Sequential()
model.add(InputLayer(input_shape=(5,)))
model.add(Dense(10, input_shape=(5,), activation=linear))

The problem appears to be that Sequential.get_config() references self.layers rather than self._layers:

  @property
  def layers(self):
    # Historically, `sequential.layers` only returns layers that were added
    # via `add`, and omits the auto-generated `InputLayer` that comes at the
    # bottom of the stack.
    if self._layers and isinstance(self._layers[0], InputLayer):
      return self._layers[1:]
    return self._layers

so no input shape gets saved (no matter whether the InputLayer was added implicitly or explicitly), and the delayed-build pattern is used on loading, which does not create any weights, and so it looks like the model has no layers with weights.
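A quick way to observe this in a Python session (a minimal sketch, assuming Keras 2.2.x, where Sequential still exposes the private _layers list quoted above):

from keras.models import Sequential
from keras.layers import Dense, InputLayer

model = Sequential()
model.add(InputLayer(input_shape=(5,)))
model.add(Dense(10))

print(len(model.layers))    # 1 -- the `layers` property drops the InputLayer
print(len(model._layers))   # 2 -- the full internal stack still contains it
print(model.get_config())   # built from `layers`, so no InputLayer / batch_input_shape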

Same issue using tensorflow 1.9 and Keras 2.2.2

Model definition without input_shape defined.

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation = tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation = tf.nn.softmax))

Saving and loading the model

model.save('epic_num_reader.model')
tf.keras.models.load_model('epic_num_reader.model')

The full code is the same as the one in this video, https://www.youtube.com/watch?v=wQ8BIBpya2k

Error:

ValueError                                Traceback (most recent call last)
<ipython-input-33-0aee9b4f4fad> in <module>()
----> 1 tf.keras.models.load_model('epic_num_reader.model')

/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/saving.py in load_model(filepath, custom_objects, compile)
    230 
    231     # set weights
--> 232     load_weights_from_hdf5_group(f['model_weights'], model.layers)
    233 
    234     if compile:

/usr/local/lib/python3.5/dist-packages/tensorflow/python/keras/engine/saving.py in load_weights_from_hdf5_group(f, layers)
    730                      'containing ' + str(len(layer_names)) +
    731                      ' layers into a model with ' + str(len(filtered_layers)) +
--> 732                      ' layers.')
    733 
    734   # We batch weight value assignments in a single backend call

ValueError: You are trying to load a weight file containing 3 layers into a model with 0 layers.

@ismaproco your problem might be different since you are not using InputLayer.

First, I see you're using tf.__version__ 1.9.0. You might want to upgrade to 1.10.0.
And you're invoking the Keras that comes with TensorFlow, so you would find the version as tf.keras.__version__ and it would probably be something like 2.1.6-tf

But, to diagnose the problem, from the command line run
pip install h5json
Note where the package is installed, then from the command line in the directory with your saved model
[installation directory]/h5json/h5tojson/h5tojson.py -d epic_num_reader.model >epic_num_reader.json
Then load the JSON file in your favorite editor and search for "input". It should look something like
"value": "{\"class_name\": \"Sequential\", \"config\": [{\"class_name\": \"InputLayer\", \"config\": {\"batch_input_shape\": [null, 28, 28], \"dtype\": \"float32\", \"sparse\": false, \"name\": \"sequential_input\"}}, {\"class_name\": \"Flatten\", \"config\": {\"name\": \"flatten\", \"trainable\": true, \"batch_input_shape\": [null, 28, 28], \"dtype\": \"float32\", \"data_format\": \"channels_last\"}},
Although you won't have the InputLayer there, you should see batch_input_shape in the Flatten definition.


Thanks @MmAlder, upgrading to TF 1.10.0 fixed my issue.

model.add(tf.keras.layers.Flatten())  # takes our 28x28 and makes it 1x784

Change the previous line to

model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))

Error resolved: ValueError: You are trying to load a weight file containing 3 layers into a model with 0 layers.

@prativadas That doesn't really resolve the problem. It's just a work-around. Remember that the model was built, trained, and saved without difficulty. The error arises when attempting to load the model. Changing the way the model was built can avoid the problem, but when you're trying to load a trained model, rebuilding it is often not an option. It is a better idea to fix the code. Changing Sequential.get_config() as I described above would do that. I did it locally and it works for me, but I didn't exhaustively test it and I don't have a copy of the GitHub repo to generate a pull request anyway.
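For illustration only (not the actual patched Keras source), here is a minimal sketch of the idea: serialize from the full internal layer stack, which still contains the InputLayer, instead of the stripped layers property. The helper name is made up.

import copy

def sequential_config_with_input_layer(model):
    # Use `_layers` (which keeps the InputLayer) rather than the `layers`
    # property (which drops it), mirroring the 2.1.6-style list-of-layer-dicts
    # config shown in the comment below.
    layers = getattr(model, '_layers', model.layers)
    return copy.deepcopy([
        {'class_name': layer.__class__.__name__, 'config': layer.get_config()}
        for layer in layers
    ])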

For some more information: the input layer information is missing in 2.2.0 compared to 2.1.6.

the object

{
        "class_name": "InputLayer",
        "config": {
            "batch_input_shape": [null, 64, 64, 3],
            "dtype": "float32",
            "sparse": false,
            "name": "input_1"
        }
} 

is missing

in 2.2.0

{
    "class_name": "Sequential",
    "config": [{
        "class_name": "Conv2D",
        "config": {
            "name": "conv2d_2",
            "trainable": true,
            "filters": 32,
            "kernel_size": [5, 5],
            "strides": [1, 1],
            "padding": "valid",
            "data_format": "channels_last",
            "dilation_rate": [1, 1],
            "activation": "linear",
            "use_bias": true,
            "kernel_initializer": {
                "class_name": "VarianceScaling",
                "config": {
                    "scale": 1.0,
                    "mode": "fan_avg",
                    "distribution": "uniform",
                    "seed": null
                }
            },
            "bias_initializer": {
                "class_name": "Zeros",
                "config": {}
            },
            "kernel_regularizer": null,
            "bias_regularizer": null,
            "activity_regularizer": null,
            "kernel_constraint": null,
            "bias_constraint": null
        }
    }]
}

in 2.1.6

{
    "class_name": "Sequential",
    "config": [{
        "class_name": "InputLayer",
        "config": {
            "batch_input_shape": [null, 64, 64, 3],
            "dtype": "float32",
            "sparse": false,
            "name": "input_1"
        }
    }, {
        "class_name": "Conv2D",
        "config": {
            "name": "conv2d_2",
            "trainable": true,
            "filters": 32,
            "kernel_size": [5, 5],
            "strides": [1, 1],
            "padding": "valid",
            "data_format": "channels_last",
            "dilation_rate": [1, 1],
            "activation": "linear",
            "use_bias": true,
            "kernel_initializer": {
                "class_name": "VarianceScaling",
                "config": {
                    "scale": 1.0,
                    "mode": "fan_avg",
                    "distribution": "uniform",
                    "seed": null
                }
            },
            "bias_initializer": {
                "class_name": "Zeros",
                "config": {}
            },
            "kernel_regularizer": null,
            "bias_regularizer": null,
            "activity_regularizer": null,
            "kernel_constraint": null,
            "bias_constraint": null
        }
    }]
}

All of the above was created with the code provided by @iretiayo in the comment from 25 Aug:

from keras.models import Sequential, load_model
from keras.layers import Conv2D, Input, InputLayer

# without InputLayer
# model1 = Sequential()
# model1.add(Conv2D(32, (5, 5), input_shape=(64,64,3)))
# 
# model1.save('keras-2.2.0-model1.hdf5')
# model_from_file = load_model('keras-2.2.0-model1.hdf5')
# print("Loaded model 1 from file successful")

# with InputLayer --> causes error when loading saved model
model2 = Sequential()
model2.add(InputLayer(input_shape=(64,64,3)))
model2.add(Conv2D(32, (5, 5)))

model2.save('keras-2.2.0-model2.hdf5')
model_from_file = load_model('keras-2.2.0-model2.hdf5')
print("Loaded model 2 from file successful")

An alternative: the reason for this error is that some setups demand a "skeleton" of the neural network before another network's weights can be loaded into it. So, in my loading file, I created exactly the same neural network and compiled it, but in the model.fit() call I passed epochs=0 and then used the load_weights() function with the required .h5 file as the parameter. Thus, our model is compiled but not trained, since we load the weights directly from the already-trained model.

model = tf.keras.models.Sequential()

# Solving the layers-mismatch issue
model.add(tf.keras.layers.Flatten())

# Hidden layers
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, input_dim=784))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu, input_dim=784))

# Output layer
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))

# Model architecture created. Now compile the model:
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# "Train" for zero epochs, then load the weights from the already-trained model:
model.fit(x_train, y_train, epochs=0)
model.load_weights('NumberRecognitionModelWeights.h5')

@Ketan14a So you "solve" the problem with load_model() simply by not using it? What if you don't know the structure of the model?

Well, if you are unaware of the structure, then I guess downgrading Keras to an older version can resolve this issue. In older Keras versions, this issue never arose.

I suggest this thread be renamed "Models with InputLayer are not serialized to HDF5 correctly". Below is a demonstration of the issue and a hack to fix existing saved models.

docker run -it tensorflow/tensorflow:1.11.0-py3 /bin/bash
apt-get update
apt-get install python3-venv git hdf5-tools
python3 -m venv env
source env/bin/activate
pip install keras tensorflow
pip install git+git://github.com/keras-team/keras.git --upgrade --no-deps
python test.py
import keras
from keras.models import Sequential, load_model
from keras.layers import Dense, Input, InputLayer

print('keras.__version__=', keras.__version__)

fname1 = 'test1.h5'
model1 = Sequential()
model1.add(Dense(1, input_shape=(64, 64), name='dense'))
model1.compile(loss='mse', optimizer='adam')
model1.save(fname1)
model1 = load_model(fname1)

fname2 = 'test2.h5'
model2 = Sequential()
model2.add(InputLayer((64,64), name='input'))
model2.add(Dense(1, name='dense'))
model2.compile(loss='mse', optimizer='adam')
model2.save(fname2)
# ValueError: You are trying to load a weight file containing 1 layers into a model with 0 layers
model2 = load_model(fname2)

keras.__version__= 2.2.4

$ h5dump -A test1.h5 > test1.structure
$ h5dump -A test2.h5 > test2.structure
$ diff test1.structure test2.structure
...
<       (0): "{"class_name": "Sequential", "config": {"name": "sequential_1", "layers": [{"class_name": "Dense", "config": {"units": 1, "kernel_constraint": null, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "use_bias": true, "dtype": "float32", "activation": "linear", "kernel_initializer": {"class_name": "VarianceScaling", "config": {"seed": null, "distribution": "uniform", "mode": "fan_avg", "scale": 1.0}}, "batch_input_shape": [null, 64, 64], "bias_constraint": null, "activity_regularizer": null, "name": "dense", "bias_regularizer": null, "trainable": true}}]}}"
---
>       (0): "{"class_name": "Sequential", "config": {"name": "sequential_2", "layers": [{"class_name": "Dense", "config": {"units": 1, "kernel_constraint": null, "bias_initializer": {"class_name": "Zeros", "config": {}}, "kernel_regularizer": null, "use_bias": true, "trainable": true, "activation": "linear", "kernel_initializer": {"class_name": "VarianceScaling", "config": {"seed": null, "distribution": "uniform", "mode": "fan_avg", "scale": 1.0}}, "bias_constraint": null, "activity_regularizer": null, "name": "dense", "bias_regularizer": null}}]}}"
...

test1 has the additional structure: "batch_input_shape": [null, 64, 64], "dtype": "float32".

You can fix this using:

import json
import h5py

def fix_layer0(filename, batch_input_shape, dtype):
    with h5py.File(filename, 'r+') as f:
        model_config = json.loads(f.attrs['model_config'].decode('utf-8'))
        layer0 = model_config['config'][0]['config']
        layer0['batch_input_shape'] = batch_input_shape
        layer0['dtype'] = dtype
        f.attrs['model_config'] = json.dumps(model_config).encode('utf-8')

# Example
fix_layer0('test2.h5', [None, 64, 64], 'float32')

Thanks a lot @cyounkins

I can now continue training my VGG16 model on Keras 2.2.4. I just needed to set the shape to [None, 256, 256, 3], and the configuration to change was at ['config']['layers'][0]['config'] instead of ['config'][0]['config'].
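For reference, a hedged sketch of @cyounkins's helper adapted to that newer config layout; the function name and example filename are made up, and it assumes the file stores the Sequential config as a dict with a 'layers' list (as in Keras 2.2.4 files):

import json
import h5py

def fix_layer0_nested(filename, batch_input_shape, dtype):
    # Same idea as fix_layer0 above, but for HDF5 files where the Sequential
    # config is a dict with a 'layers' list rather than a bare list of layers.
    with h5py.File(filename, 'r+') as f:
        model_config = json.loads(f.attrs['model_config'].decode('utf-8'))
        layer0 = model_config['config']['layers'][0]['config']
        layer0['batch_input_shape'] = batch_input_shape
        layer0['dtype'] = dtype
        f.attrs['model_config'] = json.dumps(model_config).encode('utf-8')

# Example for the VGG16 case described above ('vgg16.h5' is a placeholder name)
fix_layer0_nested('vgg16.h5', [None, 256, 256, 3], 'float32')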

Closing as this is resolved

@wt-huang Was there a commit that resolved it?

I see a workaround, but not a fix.
Would be better to re-open the issue.

Btw, this is happening in a recent version, with models saved in the same session:

In [17]: tf.keras.__version__
Out[17]: '2.2.4-tf'

In [18]: tf.__version__
Out[18]: '1.14.1-dev20190311'

@cottrell Did you find the solution?

@cottrell @KhanAAI The only way I was able to get this to work was to downgrade from keras 2.2.* to 2.1.* (specifically I used 2.1.6)

I was subclassing Model and did not provide an input shape when building the model. It seems you can bypass this issue by first calling the model on a sample input and then loading the weights into the model.

Note that I am using the TensorFlow (tf.keras) module. It might also work here.

For example:

import numpy as np
import tensorflow as tf

input_shape = (100, 100, 1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(filters=3, kernel_size=3, strides=1,
                                 activation=tf.nn.relu, use_bias=True))
# Build the model by calling it on a sample input (note the added batch dimension),
# then load the saved weights layer by layer.
model(np.zeros((1,) + input_shape, dtype=np.float32))
model.layers[0].set_weights(weights)  # `weights` taken from the saved model

...

This seems to work for me.

Closing as this is resolved

I really dislike it when an issue with 30 or more comments gets closed with a mere "closing as resolved".
It does not provide any context or any info on how, by whom, and why the issue is considered solved.

A summary of the workarounds (they cannot be called solutions) I found in the thread:

  • Downgrade to Keras 2.1.0
  • Don't use InputLayer

In case you spent your night training a model and you're pissed that you can't retrieve it, a ~solution~ workaround I found helpful is:

  • Re-create the model architecture
  • Load the model file as a weights file

model_path = "yourpath.hdf5"  # it contains the architecture and the weights
model = create_model()  # the same function you used to create the model before training
model.load_weights(model_path)

I opened #11683 in response to this being closed, and as far as I know this is still an issue. #11683 better describes the situation as "Models with InputLayer are not serialized to HDF5 correctly" because as the h5dump output shows, equivalent models are not serialized the same.

I'm using Keras via tensorflow-gpu 1.13.1. I'm not using Input; however, I do use the input_shape argument on Conv2D (which might be using Input internally, perhaps).

Conv2D(3, (5, 5), padding="same", input_shape=inputShape)

@cottrell @KhanAAI The only way I was able to get this to work was to downgrade from keras 2.2.* to 2.1.* (specifically I used 2.1.6)

I've moved on to subclassing the Keras API in order to use TF 2.0 with the graph etc. I haven't yet gone back to the serialization problem, since that path changes the approach, I think.

save files with .hdf5 instead of h5

save files with .hdf5 instead of h5

Thank you. I'll give this a try when I get a chance!

If you use multiple GPUs to train your model, you may get this problem. You can use this code to train:
parallel_model = keras.utils.multi_gpu_model(model, 2)

I consider this a high priority bug. Why has the issue been closed?

I'm seeing something similar with tf.keras: https://github.com/tensorflow/tensorflow/issues/28668. This seems relatively high priority since I'm currently unable to save any tf.keras models.

Not sure if it also helps you, but I could circumvent the issue by installing Keras 2.1.6

Note: You have to save the model using this version of Keras in order to be able to load it. Models saved using the latest versions didn't work for me.

@renatobellotti
Thanks for your message. I agree with you. I had Keras 2.2.4 and ran into the same problem with load_model (which really pissed me off...). Then I downgraded Keras to 2.1.6 and it works.

It did not work for me with Keras 2.1.6; I am using Windows 10 + Python 3.

It did not work for me with Keras 2.1.6; I am using Windows 10 + Python 3.

Have you saved the model using keras 2.1.6 as well?

No, because I am using a model from someone else. He said he saved the model with 2.1.6, but it does not work for me...

It still happens to me in version 2.3.0

Problem still present in 2.3.1, in exactly the same form.

I'm wondering why this issue is closed as the problem has not been resolved.

@wt-huang

Had the same problem. What worked was stopping using anything from normal keras, and just using tf.keras everywhere

Had the same error when trying to fine-tune a VGG16 model with Keras version 2.2.4.
As @juliojj suggested, downgrading Keras to version 2.1.6 solved my problem.

from imageai.Prediction.Custom import ModelTraining
import os

trainer = ModelTraining()
trainer.setModelTypeAsYOLOv3()
trainer.setDataDirectory("/content/drive/My Drive/Colab Notebooks/jersey")
trainer.trainModel(num_objects=10, num_experiments=50, enhance_data=True, batch_size=8, show_network_summary=True, continue_from_model="/content/drive/My Drive/Colab Notebooks/jersey/models/detection_model-ex-053--loss-0007.979.h5")

This is the error I keep facing:

AttributeError                            Traceback (most recent call last)
<ipython-input-...> in <module>()
      3
      4 trainer = ModelTraining()
----> 5 trainer.setModelTypeAsYOLOv3()
      6 trainer.setDataDirectory("/content/drive/My Drive/Colab Notebooks/jersey")
      7 trainer.trainModel(num_objects=10, num_experiments=50, enhance_data=True, batch_size=8, show_network_summary=True, continue_from_model="/content/drive/My Drive/Colab Notebooks/jersey/models/detection_model-ex-053--loss-0007.979.h5")

AttributeError: 'ModelTraining' object has no attribute 'setModelTypeAsYOLOv3'

Please, any help?

any progress?

This was a long time ago, but for anyone hitting this: I have a feeling you need to call the model once (build it) before saving. That is likely the cause of the slightly opaque error.

@cottrell's suggestion was the only one that I could get to work.

Same issue in 2.3.1. I don't know why this long-standing issue has been closed and not looked at. I went ahead and recommended Keras to my team instead of our PyTorch, and here I am with this bug.

I solved this so easily.

Just call the .build() method with the input_shape parameter.

model.load_weights('my_weights.hdf5')
model.build(input_shape=(1, 224, 224, 3))

Had the same problem. What worked was stopping using _anything_ from normal keras, and just using tf.keras everywhere

I am using tf.keras everywhere, but I am still facing this problem.

I implemented a cGAN with TensorFlow 1.13 and the code ran fine on Windows and Linux, but when I upgraded to TensorFlow 2.2 I had a similar issue.
After debugging the model by removing some of the layers, I found that ONE of the LeakyReLU layers (layers.LeakyReLU()(x)) was causing the issue. Removing it or replacing it with another activation layer solved the problem for me, but I still don't know why.

I had a similar bug and don't know why this issue was closed. Hello from 2020!

Yep, downgrading to Keras 2.1.x also solved the problem for me, too, just as reported by @jeffreynghm and @juliojj.

!!!
ImportError: Keras requires TensorFlow 2.2 or higher. Install TensorFlow via pip install tensorflow

I solved this so easily.

Just call the .build() method with the input_shape parameter.

model.load_weights('my_weights.hdf5')
model.build(input_shape=(1, 224, 224, 3))

This requires passing the input shape, so if you need a flexible shape (as in text processing, or simply if you have images of several different shapes) it won't work.
This issue does not happen when saving in the TensorFlow SavedModel format.
Maybe it would be better if the Keras HDF5 saving format had the same property as the TensorFlow saving format?
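Following up on that last point, a minimal sketch (tf.keras 2.x; the directory name is a placeholder) showing the SavedModel format keeping a partially unknown input shape:

import numpy as np
import tensorflow as tf

# Variable-length input (e.g. text): the time dimension is left as None.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, 8)),
    tf.keras.layers.Dense(10),
])
model.compile(loss='mse', optimizer='adam')

model.save('saved_model_dir', save_format='tf')   # SavedModel, not HDF5
restored = tf.keras.models.load_model('saved_model_dir')
restored.predict(np.zeros((1, 5, 8)))             # works for any sequence length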

