I'm not sure what the specific cause is yet or how to isolate it, but I'm having some issues with the load_model
method. I get the error below complaining about the weight shape for the optimizer.
Exception Traceback (most recent call last)
<ipython-input-7-946052120828> in <module>()
7
8 if args.model_js == "":
----> 9 model = load_model(args.model_h5, custom_layer_dict)
10 else:
11 model = model_from_json(open(args.model_js).read(), custom_layer_dict)
/usr/local/lib/python2.7/dist-packages/keras/models.pyc in load_model(filepath, custom_objects)
165 optimizer_weight_names = [n.decode('utf8') for n in optimizer_weights_group.attrs['weight_names']]
166 optimizer_weight_values = [optimizer_weights_group[n] for n in optimizer_weight_names]
--> 167 model.optimizer.set_weights(optimizer_weight_values)
168 f.close()
169 return model
/usr/local/lib/python2.7/dist-packages/keras/optimizers.pyc in set_weights(self, weights)
95 str(pv.shape) +
96 ' not compatible with '
---> 97 'provided weight shape ' + str(w.shape))
98 weight_value_tuples.append((p, w))
99 K.batch_set_value(weight_value_tuples)
Exception: Optimizer weight shape (11, 1, 188, 208) not compatible with provided weight shape (208, 188, 11, 1)
Image ordering does not affect this. The model has some 1D convolutions, fully connected layers, merging, and some custom layers. If I instead use model_from_json
and then call load_weights,
no problem occurs.
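For reference, the workaround described above can be sketched as follows. This is a minimal, self-contained sketch assuming a current tf.keras install; the tiny Dense model and the temp-file paths are placeholders standing in for the real model and files:

```python
# Workaround sketch: rebuild the architecture from JSON, then load only the
# layer weights with load_weights, skipping the saved optimizer state.
import os
import tempfile
from tensorflow import keras

# A tiny stand-in model; in the real scenario this is the trained model.
model = keras.Sequential([keras.Input(shape=(3,)), keras.layers.Dense(4)])
model.compile(optimizer="adam", loss="mse")

tmpdir = tempfile.mkdtemp()
json_path = os.path.join(tmpdir, "model.json")
weights_path = os.path.join(tmpdir, "model.weights.h5")

# Save architecture and weights separately.
with open(json_path, "w") as f:
    f.write(model.to_json())
model.save_weights(weights_path)

# Reload: architecture from JSON, weights from the weights file.
# No optimizer state is restored, so the shape mismatch never triggers.
with open(json_path) as f:
    restored = keras.models.model_from_json(f.read())
restored.load_weights(weights_path)
```

The key point is that load_weights touches only the layer weights, whereas load_model also tries to restore the saved optimizer state.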
I have kind of the same problem reported here: #3974
I can save and load the model before I train it, but I cannot load the model saved after training; it fails with the same error:
Exception: Optimizer weight shape (64,) not compatible with provided weight shape (64, 3, 3, 3)
I installed HDFView, opened the HDF5 file, and deleted the optimizer part of the file. Now I can load the model and it works like a charm! However, I think it is a bug and should be fixed.
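The same edit can be scripted with h5py instead of HDFView. This is a sketch under the assumption that the saved file uses the usual Keras HDF5 layout with top-level "model_weights" and "optimizer_weights" groups; the dummy file created below stands in for a real model.h5:

```python
# Scripted version of the HDFView workaround: delete the optimizer_weights
# group from a saved Keras HDF5 file so load_model has no optimizer state
# to restore. The dummy file here stands in for a real saved model.
import os
import tempfile
import h5py

path = os.path.join(tempfile.mkdtemp(), "model.h5")

# Create a stand-in file with the two top-level groups Keras writes.
with h5py.File(path, "w") as f:
    f.create_group("model_weights")
    opt = f.create_group("optimizer_weights")
    opt.create_dataset("iterations", data=[0])

# Delete the optimizer group in place; the layer weights are untouched.
with h5py.File(path, "a") as f:
    if "optimizer_weights" in f:
        del f["optimizer_weights"]
```

As noted above, though, this throws away the optimizer state, so resuming training starts with a fresh optimizer.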
This worked for me. Thanks
I solved the problem by using the load_weights function in keras.models. My solution is at issue #4044. Maybe it's useful for you.
I am afraid, this issue is not solved, and should be re-opened, as it still occurs in the latest Keras version.
I solved the issue by making sure that the exact version and subversion of Keras are the same in the environment that creates the model and the one that loads it. For example, if I'm creating and training the model with Keras 2.0.3, the VM that loads it has to run Keras 2.0.3 as well; 2.0.0 won't work.
I solved the issue by upgrading from 2.0.2 to 2.0.3. Perhaps there should be a way to maintain interoperability? @kiennguyen94
Do you have a specific example of a script that:
I've trained a model with the following code: https://gist.github.com/apacha/f52935d1225ccbb21d66fd5f4011d387
Note two things: I've omitted the code for actually loading the data (see this repository for the entire code). And it works with other models like this one.
Interestingly this code works:
best_model = training_configuration.classifier() # recreate the model
best_model.load_weights(best_model_path)
otherwise I get this error:
Traceback (most recent call last):
File "C:/Users/Alex/Repositories/MusicScoreClassifier/ModelGenerator/TrainModel.py", line 93, in <module>
best_model = keras.models.load_model(best_model_path)
File "C:\Programming\Anaconda3\lib\site-packages\keras\models.py", line 280, in load_model
model.optimizer.set_weights(optimizer_weight_values)
File "C:\Programming\Anaconda3\lib\site-packages\keras\optimizers.py", line 79, in set_weights
'provided weight shape ' + str(w.shape))
ValueError: Optimizer weight shape (128,) not compatible with provided weight shape (16,)
I'm using Keras 2.0.3 and Tensorflow-Gpu 1.0.1
+1 The issue is still not solved.
Same here
All of my transfer learning models (ResNet50, Inception, Xception) fail to load after saving. I have to delete the optimizer parameters from the HDF5 file, but then it won't work if I want to continue training.
Python 3.5, Keras 2.0.3, TensorFlow, Windows 64-bit. I tried different optimizers (Adam, Nadam, SGD) and it doesn't seem to be optimizer-dependent.
@sakvaua did you save the model in Keras 2.0.2?
I had saved the model in Keras 2.0.3 and tried to load it in 2.0.2. When I upgraded and tried loading it in 2.0.3 it worked.
It seems a fix could be to modify load_model
so that it doesn't load the optimizer.
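Later Keras releases expose essentially this escape hatch: load_model accepts a compile=False argument that skips restoring the training configuration and optimizer state. A sketch, assuming a current tf.keras install (the tiny model and temp path are placeholders):

```python
# Sketch of loading a saved model without its optimizer state, then
# recompiling with a fresh optimizer before continuing training.
import os
import tempfile
from tensorflow import keras

path = os.path.join(tempfile.mkdtemp(), "model.h5")

# A tiny stand-in model, saved with its optimizer state included.
model = keras.Sequential([keras.Input(shape=(3,)), keras.layers.Dense(2)])
model.compile(optimizer="adam", loss="mse")
model.save(path)

# compile=False restores architecture and weights only; the saved
# optimizer state is never touched, so the shape check never runs.
restored = keras.models.load_model(path, compile=False)
restored.compile(optimizer="adam", loss="mse")  # fresh optimizer
```

The trade-off is the same as deleting the group by hand: training resumes with a reset optimizer rather than the saved one.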
@zafarali. Hmmm. I have 2.0.4 and models are saved and loaded during the same session.
Still have same issue with 2.0.4
It seems to be related to creating new optimizers or reusing layers in different models.
I solved the issue by ensuring the Keras versions installed on my different machines matched.
It makes sense that working with the same model across different versions of the library could cause hiccups due to differences in conventions between versions.
First I looked up which versions were installed on my machines (guide here: https://stackoverflow.com/a/10215100/2661720),
and on the machine that loads the model, I overwrote the existing Keras installation with the version installed on the machine that created the model (guide here: https://stackoverflow.com/a/33812968/2661720).
That was the thought process that solved the problem for me.
Seems to me that this error comes up when I try to load too many models at the same time. Currently I'm loading a number of different models into a test script to be tested one at a time. Is there a way to force tensorflow to forget a graph? Bet that would solve it for people in my situation.
@tstandley
https://keras.io/backend/
clear_session()
Destroys the current TF graph and creates a new one. Useful to avoid clutter from old models / layers.
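In code, the advice above looks roughly like this. A sketch assuming a current tf.keras install; evaluate_models and model_paths are hypothetical names for the test-script loop described earlier:

```python
# Sketch: when loading many models one after another in a test script,
# call clear_session() before each load so every model starts from a
# fresh graph instead of accumulating state from the previous ones.
from tensorflow import keras

def evaluate_models(model_paths):
    """Load each saved model in turn and record a simple statistic."""
    results = {}
    for path in model_paths:
        keras.backend.clear_session()  # drop state from the previous model
        model = keras.models.load_model(path, compile=False)
        results[path] = model.count_params()
    return results
```

Without the clear_session() call, each load_model adds to the same global graph, which is the clutter the docs warn about.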
@sakvaua Thanks. You're awesome! Wish I knew about this sooner.
A more informative error message would also help, but I don't know how to distinguish the two scenarios.
Ensuring the same version of Keras is installed in both settings (2.0.6 in my case) solved the problem.
Closing since the issue is fixed with the current version.