Models saved using custom metrics throw an exception on loading:

```python
def x(y_true, y_pred):
    return 0.0 * y_pred

# define model here
model.compile(...,
              metrics=[x])
model.save('somefile')
break_here = load_model('somefile')
```
This produces:

```
Using Theano backend.
Traceback (most recent call last):
  File "test-case.py", line 14, in <module>
    break_here = load_model('somefile')
  File "/usr/local/lib/python3.5/dist-packages/keras/models.py", line 155, in load_model
    sample_weight_mode=sample_weight_mode)
  File "/usr/local/lib/python3.5/dist-packages/keras/models.py", line 547, in compile
    **kwargs)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 662, in compile
    metric_fn = metrics_module.get(metric)
  File "/usr/local/lib/python3.5/dist-packages/keras/metrics.py", line 104, in get
    return get_from_module(identifier, globals(), 'metric')
  File "/usr/local/lib/python3.5/dist-packages/keras/utils/generic_utils.py", line 16, in get_from_module
    str(identifier))
Exception: Invalid metric: x
```
The only workaround seems to be to edit metrics.py directly, which is a little rough.
Here's a gist that reproduces the problem.
I'd like keras to ignore missing metrics when loading, instead of stopping. In an ideal world, it could even search the current scope for appropriate functions.
Loading a model with `load_model('model.h5')` that has custom metrics (not defined in `metrics.py`) also fails for me.
```
  File "/Users/angermue/python/keras/keras/models.py", line 155, in load_model
    sample_weight_mode=sample_weight_mode)
  File "/Users/angermue/python/keras/keras/engine/training.py", line 671, in compile
    metric_fn = metrics_module.get(metric)
  File "/Users/angermue/python/keras/keras/metrics.py", line 155, in get
    return get_from_module(identifier, globals(), 'metric')
  File "/Users/angermue/python/keras/keras/utils/generic_utils.py", line 16, in get_from_module
    str(identifier))
Exception: Invalid metric: f1
```
The problem is that `compile()` in `training.py` only looks in `metrics.py` for the metric definitions, not in the scope where `load_model()` is executed.
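The lookup failure can be sketched in plain Python (the names below are illustrative, not the actual Keras internals): `get_from_module` resolves a string identifier against the metrics module's own `globals()`, so a function defined in the calling script is never found.

```python
# Minimal sketch of the name-based lookup; get_metric and _MODULE_GLOBALS
# are invented here to mirror how get_from_module behaves.

def mse(y_true, y_pred):
    # stand-in for a built-in metric defined inside metrics.py
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

_MODULE_GLOBALS = {'mse': mse}   # plays the role of metrics.py's globals()

def get_metric(identifier):
    """Resolve a metric by name, consulting only the module's own globals."""
    if callable(identifier):
        return identifier
    if identifier in _MODULE_GLOBALS:
        return _MODULE_GLOBALS[identifier]
    raise Exception('Invalid metric: ' + str(identifier))

def x(y_true, y_pred):
    # custom metric defined in the caller's scope: invisible to get_metric
    return 0.0

get_metric('mse')    # found in the module registry
get_metric(x)        # a callable passes straight through
# get_metric('x')    # raises Exception: Invalid metric: x
```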
Restoring a model from a json file worked for me.
This should not be too hard to fix.
I have run into the same problem.
Any update on this? Can the JSON workaround be used with model checkpoints?
Is there a way to delete custom metric definitions from the h5 file as another workaround?
Doesn't the `custom_objects` parameter for `load_model` solve most of these cases? In @leondz's example:

```python
load_model('somefile', custom_objects={'x': x})
```
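For completeness, here is a self-contained round trip of that fix (the toy model and the temp-file path are invented for illustration):

```python
import os
import tempfile
import tensorflow as tf

def x(y_true, y_pred):
    # the custom metric from the original report
    return 0.0 * y_pred

# Tiny stand-in model, just enough to save and reload.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse', metrics=[x])

path = os.path.join(tempfile.mkdtemp(), 'somefile.h5')
model.save(path)

# custom_objects tells the loader how to resolve the name 'x'.
restored = tf.keras.models.load_model(path, custom_objects={'x': x})
```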
@gvtulder Yes, that works!
@gvtulder It works! However, `load_model()` is almost twice as slow as rebuilding the model and using `load_weights()`. I don't know the reason.
Same problem here. A neural network with merged layers cannot be loaded with `load_model` when it uses customized metrics.
I have the same problem, and passing `custom_objects` does help with `load_model`. However, I cannot use it with `model_from_json`.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
To load a neural network with merged layers and custom metrics, I saved the weights (not the model) and recreated the model during testing; instead of calling `fit`, I load the weights into the recreated model, thus skipping training. Not the most elegant solution, but I can confirm that this at least works.
^^ `load_weights` on a freshly built model will usually work.
If it does not (this happened to me with a model with custom layers), you can train that raw model for just one epoch, and then use `load_weights` to load the well-trained weights you saved before.
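The weights-only workaround described above can be sketched as follows (the `build_model` helper and file path are invented for illustration; no metric lookup happens during `load_weights`):

```python
import os
import tempfile
import tensorflow as tf

def build_model():
    # Recreate the architecture in code instead of deserializing it.
    return tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                                tf.keras.layers.Dense(2)])

trained = build_model()
# ... compile with custom metrics and fit here ...

wpath = os.path.join(tempfile.mkdtemp(), 'model.weights.h5')
trained.save_weights(wpath)

fresh = build_model()       # same architecture, built from scratch
fresh.load_weights(wpath)   # only weights are read; metrics never resolved
```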
Further extension: maybe you defined a custom metric in the `model.compile` call. Then you can load the model like below:

```python
def custom_auc(y_true, y_pred):
    pass

model.compile(metrics=[custom_auc])

# load model
from deepctr.layers import custom_objects
custom_objects["custom_auc"] = custom_auc
model = tf.keras.models.load_model(self.input_model_file, custom_objects=custom_objects)
```
> Doesn't the `custom_objects` parameter for `load_model` solve most of these cases? In @leondz's example: `load_model('somefile', custom_objects={'x': x})`

Thanks, it really helped.
Am I missing something here?
I see the proposed solution as a workaround, but not a solution. I would rather load the model without the custom metrics, knowingly disregarding them.
Might this not be feasible in a scenario where I train a model and save it to solely use it for inference in a later stage? I wouldn't need these custom metrics for inference, would I?
@sebastianfast
I second that - we should be able to load a model without custom metrics.
In TF 2, you can load the model without compilation by passing `compile=False`. See https://www.tensorflow.org/api_docs/python/tf/keras/models/load_model#arguments_1 .
This lets you call `.compile` again with a different set of metrics, such as standard metrics only, but it will lose your optimizer state.
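A self-contained sketch of the `compile=False` approach (the toy model and temp-file path are invented for illustration):

```python
import os
import tempfile
import tensorflow as tf

def custom_metric(y_true, y_pred):
    # hypothetical custom metric, stands in for whatever the model used
    return tf.reduce_mean(y_pred)

model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(1)])
model.compile(optimizer='adam', loss='mse', metrics=[custom_metric])

path = os.path.join(tempfile.mkdtemp(), 'model.h5')
model.save(path)

# compile=False skips restoring the training config, so the custom metric
# is never looked up and no custom_objects mapping is needed.
restored = tf.keras.models.load_model(path, compile=False)
restored.compile(optimizer='adam', loss='mse')  # fresh optimizer state
```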