Keras: load_model: No model found in config file

Created on 31 Jan 2017 · 8 comments · Source: keras-team/keras

I converted a few caffe models to Keras using the MarcBS fork. However, upon attempting to load the models, I get this error:
>>> x = keras.models.load_model('vgg16_weights.h5')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/tigress/dchouren/thesis/git/keras-marcbs/keras/models.py", line 138, in load_model
    raise ValueError('No model found in config file.')
ValueError: No model found in config file.

I then downloaded a pretrained VGG16 model from here: https://gist.github.com/baraldilorenzo/07d7802847aaad0a35d3

Attempts to load this model were met with the same error.

I'm using the latest version of Keras (cloned and installed with python setup.py develop) with Python 3.5.2 and the Theano backend. I haven't changed anything in the Keras develop branch except for adding some print lines. I was using the non-forked Keras when I attempted to load these models, although the forked version also fails with the same error.
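load_model raises exactly this error when the top-level 'model_config' attribute is missing from the HDF5 file, i.e. when the file contains only weights. A quick way to check what a given file actually contains (a sketch using h5py; the helper name is mine):

```python
import h5py


def has_model_config(path):
    """Return True if the HDF5 file carries a serialized model architecture.

    load_model raises "No model found in config file." precisely when the
    top-level 'model_config' attribute is absent, i.e. the file holds
    weights only (e.g. the output of save_weights or a weight converter).
    """
    with h5py.File(path, "r") as f:
        return "model_config" in f.attrs
```

If this returns False, the file can only be used with load_weights on a model you rebuild in code.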

Anyone see this before?


All 8 comments

Was this resolved? I'm having the same error.

Yes, it turned out there was no bug for me. If you're having problems with the Caffe-to-Keras conversion: the MarcBS fork just converts layer weights into a form you can load into an existing model architecture; it doesn't create a model with those weights from scratch. The weight conversion and loading work fine for me. It was just that the output 'Finished storing the converted model...' confused me about what was being saved (just layer weights, not a model!).
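In other words, the converter's output has to be loaded with load_weights into an architecture you define yourself. A minimal sketch of the pattern (the toy architecture and file path are mine; in practice the weights file would come from the converter, and the architecture must match it layer for layer). For illustration, the stand-in weights file is produced with save_weights:

```python
import os
import tempfile

from keras import Input
from keras.layers import Dense
from keras.models import Sequential


def build_model():
    # The architecture defined here must match the stored weights
    # layer for layer (a toy example, not the real VGG16).
    return Sequential([
        Input(shape=(4,)),
        Dense(8, activation="relu"),
        Dense(3, activation="softmax"),
    ])


# Stand-in for the converter's output: a file holding weights only.
# (Keras 3 requires the ".weights.h5" suffix; older Keras accepted ".h5".)
weights_path = os.path.join(tempfile.mkdtemp(), "converted.weights.h5")
build_model().save_weights(weights_path)

model = build_model()             # recreate the architecture in code...
model.load_weights(weights_path)  # ...then load just the weights
```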

   To export a Keras model (.h5) to TensorFlow (.pb):

I used the following code on my model:

import keras
import tensorflow
from keras import backend as K
from tensorflow.contrib.session_bundle import exporter
from keras.models import model_from_config, Sequential

print("Loading model for exporting to Protocol Buffer format...")
model_path = "C:/Users/User/buildingrecog/model.h5"
model = keras.models.load_model(model_path)

K.set_learning_phase(0) # all new operations will be in test mode from now on
sess = K.get_session()

# serialize the model and get its weights, for quick re-building
config = model.get_config()
weights = model.get_weights()

# re-build a model where the learning phase is now hard-coded to 0
new_model = Sequential.from_config(config)  # Sequential.from_config, not Sequential.model_from_config
new_model.set_weights(weights)

export_path = "C:/Users/User/buildingrecog/khi_buildings.pb" # where to save the exported graph
export_version = 1 # version number (integer)

saver = tensorflow.train.Saver(sharded=True)
model_exporter = exporter.Exporter(saver)
signature = exporter.classification_signature(input_tensor=model.input, scores_tensor=model.output)
model_exporter.init(sess.graph.as_graph_def(), default_graph_signature=signature)
model_exporter.export(export_path, tensorflow.constant(export_version), sess)

but got this error:


ValueError                                Traceback (most recent call last)
<ipython-input> in <module>()
      7 print("Loading model for exporting to Protocol Buffer format...")
      8 model_path = "C:/Users/User/buildingrecog/model.h5"
----> 9 model = keras.models.load_model(model_path)
     10
     11 K.set_learning_phase(0) # all new operations will be in test mode from now on

C:\Users\User\Anaconda3\lib\site-packages\keras\models.py in load_model(filepath, custom_objects)
    228     model_config = f.attrs.get('model_config')
    229     if model_config is None:
--> 230         raise ValueError('No model found in config file.')
    231     model_config = json.loads(model_config.decode('utf-8'))
    232     model = model_from_config(model_config, custom_objects=custom_objects)

ValueError: No model found in config file.

Please help me solve this!

I am facing the same problem, using Keras with the Theano backend. Waiting for your help.

same problem :(

I had the same question, but I've solved it:
rebuild the Sequential model in code, then use load_weights instead of load_model.

One possible and somewhat obvious cause of this (thanks yxfGrace for the hint) is saving the model with the ModelCheckpoint callback with save_weights_only set to True. In that case only the weights are serialised, not the model structure, so you need to instantiate the model from your original code and then load the weights from the file. This was the cause of the error for me, at least. To fix it so you can use load_model, set save_weights_only to False when setting up the callback.
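For reference, a sketch of a checkpoint set up to save the full model (the path and monitor settings are illustrative; note that Keras 3 expects the native ".keras" extension here, while older versions used ".h5"):

```python
from keras.callbacks import ModelCheckpoint

# save_weights_only=False (the default) writes the full model
# (architecture plus weights), so the checkpoint works with load_model.
checkpoint = ModelCheckpoint(
    "best_model.keras",       # hypothetical output path
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=False,
)

# Then pass it to fit, e.g.:
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           callbacks=[checkpoint])
```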

Try using model.save(); it might solve the problem!
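A minimal round trip showing the difference (the toy model and temp path are mine): model.save() writes the architecture together with the weights, so load_model can restore it, whereas a weights-only file reproduces the error above.

```python
import os
import tempfile

from keras import Input
from keras.layers import Dense
from keras.models import Sequential, load_model

model = Sequential([Input(shape=(4,)), Dense(2)])

path = os.path.join(tempfile.mkdtemp(), "full_model.h5")
model.save(path)             # writes architecture + weights (full model)
restored = load_model(path)  # succeeds: the file carries a model config

# model.save_weights(...) instead would leave out the architecture,
# and load_model on that file raises "No model found in config file."
```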
