Keras: Loading model with custom loss function: ValueError: 'Unknown loss function'

Created on 22 Mar 2017  ·  56 Comments  ·  Source: keras-team/keras

I trained and saved a model that uses a custom loss function (Keras version: 2.0.2):

model.compile(optimizer=adam, loss=SSD_Loss(neg_pos_ratio=neg_pos_ratio, alpha=alpha).compute_loss)

When I try to load the model, I get this error:

ValueError: ('Unknown loss function', ':compute_loss')

This is the stack trace:

ValueError                                Traceback (most recent call last)
<ipython-input-76-52ca495a8e09> in <module>()
----> 1 model, layer_dict, classifier_sizes = load_model('./model_0.h5')

/Users/pierluigiferrari/anaconda/envs/carnd-term1/lib/python3.5/site-packages/keras/models.py in load_model(filepath, custom_objects)
    258                   metrics=metrics,
    259                   loss_weights=loss_weights,
--> 260                   sample_weight_mode=sample_weight_mode)
    261 
    262     # Set optimizer weights.

/Users/pierluigiferrari/anaconda/envs/carnd-term1/lib/python3.5/site-packages/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, **kwargs)
    738             loss_functions = [losses.get(l) for l in loss]
    739         else:
--> 740             loss_function = losses.get(loss)
    741             loss_functions = [loss_function for _ in range(len(self.outputs))]
    742         self.loss_functions = loss_functions

/Users/pierluigiferrari/anaconda/envs/carnd-term1/lib/python3.5/site-packages/keras/losses.py in get(identifier)
     88     if isinstance(identifier, six.string_types):
     89         identifier = str(identifier)
---> 90         return deserialize(identifier)
     91     elif callable(identifier):
     92         return identifier

/Users/pierluigiferrari/anaconda/envs/carnd-term1/lib/python3.5/site-packages/keras/losses.py in deserialize(name, custom_objects)
     80                                     module_objects=globals(),
     81                                     custom_objects=custom_objects,
---> 82                                     printable_module_name='loss function')
     83 
     84 

/Users/pierluigiferrari/anaconda/envs/carnd-term1/lib/python3.5/site-packages/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    155             if fn is None:
    156                 raise ValueError('Unknown ' + printable_module_name,
--> 157                                  ':' + function_name)
    158         return fn
    159     else:

ValueError: ('Unknown loss function', ':compute_loss')
  • [x] Check that you are up-to-date with the master branch of Keras. You can update with:
    pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps

  • [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.

stale

Most helpful comment

I solved this problem by adding custom_objects:

model = load_model('model/multi_task/try.h5', custom_objects={'loss_max': loss_max})

my loss function:

def loss_max(y_true, y_pred):
    from keras import backend as K
    return K.max(K.abs(y_pred - y_true), axis=-1)

All 56 comments

I have this exact same error and just noticed this morning.

My metrics are very simple:

#-----------------------------------------------------------------------------------------------------------------------------------------------------
# PFA, prob false alert for binary classifier
def binary_PFA(y_true, y_pred):
    # N = total number of negative labels
    N = K.sum(1 - K.round(y_true))
    # FP = total number of false alerts, alerts from the negative class labels
    FP = K.sum(K.round(y_pred) - K.round(y_pred) * K.round(y_true))    
    return FP/N
#-----------------------------------------------------------------------------------------------------------------------------------------------------
# P_TA prob true alerts for binary classifier
def binary_PTA(y_true, y_pred):
    # P = total number of positive labels
    P = K.sum(K.round(y_true))
    # TP = total number of correct alerts, alerts from the positive class labels
    TP = K.sum(K.round(y_pred) * K.round(y_true))    
    return TP/P
#-----------------------------------------------------------------------------------------------------------------------------------------------------

myOptimizer = keras.optimizers.adadelta(lr=1.0)
model.compile(loss='binary_crossentropy', optimizer = myOptimizer, metrics=[keras.metrics.binary_accuracy, metrics.binary_PTA, metrics.binary_PFA])

It looks like it's trying to find the fn in generic_utils.py for a function_name of binary_PTA, which isn't found in custom_objects. @fchollet How do we add our metric to custom_objects?

I did this as a workaround:

model = model_from_json(open(modelFile).read())
model.load_weights(os.path.join(os.path.dirname(modelFile), 'model_weights.h5'))

@isaacgerg what Keras version are you running? I got the error described in the old title in Keras 2.0.0, now after updating to 2.0.2 I'm getting a new error (as described in the new title).

But yeah, for the moment, saving and loading the weights separately is the way to go as a workaround.

I use 2.0.1

Hi, the same problem here.
One ugly solution that worked for me is to include the custom objective into keras:

import keras.losses
keras.losses.custom_loss = custom_loss

This is a known issue on Keras 1: #3977.
On Keras 2.0 you have to replace keras.objectives with keras.losses.

My workaround is to load the JSON first and then load the weights. I keep my load function in a local lib with the rest of my Keras workarounds ;)
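
In code, that workaround looks roughly like this (a minimal sketch; the file names and my_custom_loss are placeholders, not from this thread):

from keras.models import model_from_json

# saving: architecture as JSON, weights separately
with open('model.json', 'w') as json_file:
    json_file.write(model.to_json())
model.save_weights('model_weights.h5')

# loading: no loss deserialization happens here, so no custom_objects are needed
with open('model.json') as json_file:
    model = model_from_json(json_file.read())
model.load_weights('model_weights.h5')
model.compile(optimizer='adam', loss=my_custom_loss)  # re-attach the custom loss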

This PR should have fixed this issue.

Can use:

from keras.utils.generic_utils import get_custom_objects
import SSD_Loss

loss = SSD_Loss(neg_pos_ratio=neg_pos_ratio, alpha=alpha)
get_custom_objects().update({"SSD_Loss": loss.compute_loss})
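
One caveat, based only on the ':compute_loss' in the error above (an assumption, not something this comment states): load_model looks the loss up by the function name Keras recorded in the saved file, so the registry key likely has to match that name, e.g.:

from keras.utils.generic_utils import get_custom_objects

loss = SSD_Loss(neg_pos_ratio=neg_pos_ratio, alpha=alpha)
# register under the name stored in the HDF5 file ('compute_loss')
get_custom_objects().update({"compute_loss": loss.compute_loss})
model = load_model('./model_0.h5')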

Same here.
I tried @joeyearsley's workaround and it seems to load correctly.

I solved this problem by adding custom_objects:

model = load_model('model/multi_task/try.h5', custom_objects={'loss_max': loss_max})

my loss function:

def loss_max(y_true, y_pred):
    from keras import backend as K
    return K.max(K.abs(y_pred - y_true), axis=-1)

My workaround is like @dluvizon's, except I assigned keras.losses.loss. That was the missing function name according to the error message.

/Users/apiccolboni/anaconda/lib/python2.7/site-packages/keras/utils/generic_utils.pyc in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    155             if fn is None:
    156                 raise ValueError('Unknown ' + printable_module_name,
--> 157                                  ':' + function_name)
    158         return fn
    159     else:

ValueError: ('Unknown loss function', ':loss')

keras 2.0.4
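
In code form, that assignment is roughly the following (my_custom_loss is a placeholder; the attribute name 'loss' comes from the ':loss' in the error above):

import keras.losses
# register the custom function under the name Keras recorded in the saved file
keras.losses.loss = my_custom_loss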

Hi, I have a similar problem with:
model.compile( loss=lambda x,y: custom_loss_function(x,y,third_argument), optimizer=optimizer)
Keras 2.0.3 (Python 2.7.6) gives me the error:
('Unknown loss function', ':<lambda>')

Can anyone help me? Many thanks!

Hi @pigna90 , the easiest way is to define a python function in the form:

def custom_loss_function(y_true, y_pred):
    # Compute loss
    return loss

Then you pass it as your loss in model.compile(loss=custom_loss_function, [...]).

Thanks @dluvizon, but I need three parameters as I wrote in my first post. How can I handle it?
The error is raised only when I try to reload the model that has been saved.

Check out functools.partial. This is really basic Python, so I think this is not the appropriate forum to discuss it.


@piccolbo I'm actually already using functools.partial, but I need to execute the following code both at save/serialization time and at loading time:

custom_loss_partial = functools.partial(custom_loss_function, third_argument=third_argument)
custom_loss_partial.__name__ = "custom_loss_function"

I'm looking for a way to load the model without declaring custom_loss_partial twice.

I hope my issue is relevant and clear.
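
One way to avoid the duplication, sketched under the assumption that custom_loss_function takes (y_true, y_pred, third_argument): wrap the partial in a small factory and call that same factory at both save and load time (make_loss is a hypothetical helper, not from this thread):

import functools

def make_loss(third_argument):
    # build the partial once; both save and load paths call this factory
    loss = functools.partial(custom_loss_function, third_argument=third_argument)
    loss.__name__ = 'custom_loss_function'  # the name Keras records in the saved file
    return loss

# training / saving
model.compile(optimizer='adam', loss=make_loss(third_argument))
model.save('model.h5')

# loading: map the recorded name back to the same callable
model = load_model('model.h5',
                   custom_objects={'custom_loss_function': make_loss(third_argument)})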

@Bisgates
Hi, I tried your way but sadly it doesn't work. It just gives me the same error message:
File "/home/gnahzuy/.conda/envs/gpu/lib/python3.5/site-packages/keras/losses.py", line 94, in deserialize printable_module_name='loss function') File "/home/gnahzuy/.conda/envs/gpu/lib/python3.5/site-packages/keras/utils/generic_utils.py", line 159, in deserialize_keras_object ':' + function_name) ValueError: Unknown loss function:sorenson_dice

I compile my model like:
model.compile(optimizer='adam', loss=sorenson_dice)
and I load my model like:
model = keras.models.load_model('/home/gnahzuy/U-net/scripts/val_loss=.-0.88.hdf5', custom_objects={'val_loss': sorenson_dice})
I define the loss function in the same file.

I think that might be because I gave the wrong name 'val_loss'? I tried 'loss' but it doesn't work either.

Do you have any idea about that?
Thanks

@ZY0422 You want custom_objects={'sorenson_dice': sorenson_dice}

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

I have a similar problem.
I compile my model like this:
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=sgd, metrics=['accuracy'])
When I call load_model('model.hdf5', custom_objects={'ctc': lambda y_true, y_pred: y_pred})
it raises "Unknown loss function: ''".
Can anyone help me? Thank you very much!

@caocao1989
I'd guess you're using the CTC ocr example here, too.
I was able to use custom_objects = {'<lambda>': lambda y_true, y_pred: y_pred} as a work-around.
I hope this helps!
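
For anyone wondering why the '<lambda>' key works, here is a rough sketch of the round trip (the layer name 'ctc' follows the OCR example; everything else is illustrative):

# Keras records the loss function's __name__ when saving;
# for a lambda that name is literally '<lambda>'
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer='sgd')
model.save('model.hdf5')

# so '<lambda>' is the key Keras looks up at load time
model = load_model('model.hdf5',
                   custom_objects={'<lambda>': lambda y_true, y_pred: y_pred})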

@SimulatedANeal
Yes, thanks for your reply. I solved the problem following your answer.

@SimulatedANeal
Wow, great! But I want to know why this works. I've never seen this kind of custom_objects structure before. Where did you see it?

@SimulatedANeal I am using the ssd_keras loss (from keras_loss_function.keras_ssd_loss import SSDLoss), and while loading the model I'm getting this error: ValueError: Unknown loss function:compute_loss. I tried the methods above to resolve it, but none of them worked. I am using Keras 2.1.3.

@piccolbo, As @pigna90 mentioned earlier, I am also using a custom partial function which requires additional arguments. The third argument is actually an input node in the model. Attaching a snippet from the model corresponding to it:

def sparse_weighted_loss(target, output, weights):
      return tf.multiply(tf.keras.backend.sparse_categorical_crossentropy(target, output), weights)

weights_tensor = Input(shape=(None,), dtype='float32', name='weights_input')
lossFct = partial(sparse_weighted_loss, weights=weights_tensor)
update_wrapper(lossFct, sparse_weighted_loss)

I use lossFct as my custom loss function (which is basically an example-wise weighted cross-entropy loss). Now I redefine sparse_weighted_loss in the custom_objects as follows:

def sparse_weighted_loss(target, output, weights):
      return tf.multiply(tf.keras.backend.sparse_categorical_crossentropy(target, output), weights)
custom_obj = {}
custom_obj['sparse_weighted_loss'] = sparse_weighted_loss
model = keras.models.load_model(modelPath, custom_objects=custom_obj)

While loading the model, it still throws this error:

Traceback (most recent call last):
  File "Train_Product_NER_weighted_softmax.py", line 112, in <module>
    model = BiLSTM.loadModel(sys.argv[2])
  File "/BiLSTM_weightedloss.py", line 653, in loadModel
    model = keras.models.load_model(modelPath, custom_objects=custom_obj)
  File "/usr/local/lib/python3.4/site-packages/Keras-2.1.6-py3.4.egg/keras/models.py", line 388, in load_model
  File "/usr/local/lib/python3.4/site-packages/Keras-2.1.6-py3.4.egg/keras/engine/training.py", line 837, in compile
  File "/usr/local/lib/python3.4/site-packages/Keras-2.1.6-py3.4.egg/keras/engine/training.py", line 429, in weighted
TypeError: sparse_weighted_loss() missing 1 required positional argument: 'weights'

Any insights, anyone?

@TanmayParekh It's an unrelated issue to the one discussed here. I don't think you are going to find what you need if you use the issue tracking system this way. If it were my project, you'd have to open a separate issue. Sometimes user groups are a first line of support: https://groups.google.com/forum/#!forum/keras-users Good luck.

@piccolbo Thanks a lot. New to the community. This helps! :)

@TanmayParekh Have you solved your problem? I have the same error. Thanks.

I am getting the same issue.

The model is compiled and saved:

def triplet_loss(y_true, y_pred, alpha=0.3):
    anchor, positive, negative = y_pred[0], y_pred[1], y_pred[2]
    pos_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, positive)), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(tf.subtract(anchor, negative)), axis=-1)
    basic_loss = tf.add(tf.subtract(pos_dist, neg_dist), alpha)
    loss = tf.reduce_sum(tf.maximum(basic_loss, 0.0))
    return loss

FRmodel.compile(optimizer='adam', loss=triplet_loss, metrics=['accuracy'])
FRmodel.save('model.h5')

Loading the model using:

FRmodel = load_model('model.h5')

Error:

ValueError: Unknown loss function:triplet_loss

Can someone suggest a solution for this, please?
Thanks

@shuaiw24 I used a different hack here. Couldn't solve the original problem though.

I am only saving the weights of the model right now. During loading, I re-create the model and load the weights into the model. That works well for my problem.
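
A minimal sketch of that weights-only approach (build_model and the file name are placeholders, not from this thread):

model = build_model()                              # same architecture definition both times
model.compile(optimizer='adam', loss=triplet_loss)
# ... train ...
model.save_weights('weights.h5')

# later: rebuild the architecture, load the weights, re-attach the custom loss
model = build_model()
model.load_weights('weights.h5')
model.compile(optimizer='adam', loss=triplet_loss)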

I solved this problem by adding custom_objects:

model = load_model('model/multi_task/try.h5', custom_objects={'loss_max': loss_max})

my loss function:

def loss_max(y_true, y_pred):
    from keras import backend as K
    return K.max(K.abs(y_pred - y_true), axis=-1)

This is the most elegant solution I've seen, thanks a lot!

https://github.com/keras-team/keras/issues/11821

keras 2.0.8, tf1.4

my loss is written something like this:

def segnet_loss_graph(y_true, y_pred):
    return K.categorical_crossentropy(y_true, y_pred)

and the code is:
segnet_loss = KL.Lambda(
    lambda x: segnet_loss_graph(*x), name="segnet_loss")(
    [input_gt_segmap, seg_score_map])
outputs = [segnet_loss]
model = KM.Model(inputs, outputs, name='multi_task_segment')

and the loss is added like this:

keras_model.add_loss(loss)

=======
Because I save both the weights and the structure, I load the model directly:
keras.callbacks.ModelCheckpoint(checkpoint_path, verbose=0, save_weights_only=False)
...
...
Here is the inference code:
from keras.models import load_model
import tensorflow as tf
import keras
import keras.losses
keras.losses.custom_loss ="segnet_loss"
custom_objects = {
    "backend": keras.backend,
    "tf": tf,
    "segnet_loss_graph": segnet_loss_graph
}

model = load_model(model_path,custom_objects=custom_objects)

It's no use if I change "segnet_loss_graph": segnet_loss_graph to any of the following:
"segnet_loss": segnet_loss_graph
"segnet_loss": lambda x: segnet_loss_graph(x)
"segnet_loss_graph": lambda x: segnet_loss_graph(x)

The error is one of the following, depending on which mapping I pass:

1. ValueError: The model cannot be compiled because it has no loss to optimize.
2. NameError: name 'segnet_loss_graph' is not defined
3. TypeError: 'NoneType' object is not callable

Is there any hint to solve this problem?

@SimulatedANeal @dluvizon
I compile my model like this:
model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer=adam)
After that I load model:
load_model('model.hdf5', custom_objects={'<lambda>': lambda y_true, y_pred: y_pred})
It worked, however val_loss is not improved:
Before save:
Epoch 00001: val_loss improved from inf to 228.31433, saving model to ../model.hdf5
Epoch 00002: val_loss improved from 228.31433 to 176.43184

After save:
Epoch 00001: val_loss improved from inf to 117.94428, saving model to ../model.hdf5

==> Why does the model improve from 'inf'? Is that a problem? How do I fix it?

If you load the model to continue training, you need to define your custom loss function:

def custom_loss(y_true, y_pred):
    # code
    return loss

model = load_model('model.hdf5', custom_objects={'custom_loss': custom_loss})

If you load model only for prediction (without training), you need to set compile flag to False:

model = load_model('model.hdf5', compile=False)

and you don't need to define your custom_loss, because the loss function is not necessary for prediction.

I solved this problem by adding custom_objects:

model = load_model('model/multi_task/try.h5', custom_objects={'loss_max': loss_max})

my loss function:

def loss_max(y_true, y_pred):
    from keras import backend as K
    return K.max(K.abs(y_pred - y_true), axis=-1)

My loss function is like this.

def custom_loss_function(inputs):

    def custom_loss(y_true, y_pred):
        print(inputs.shape, y_true.shape, y_pred.shape)
        x = k.exp(inputs[:,12,:])
        y_t = k.log(k.exp(y_true)*x)
        y_p = k.log(k.exp(y_pred)*x)
        # y_p[y_p==0] = 1e-6
        l = y_t - y_p
        return k.square(l)

    return custom_loss

I am getting the following error
model=load_model('lstm_fftm_custom_loss.h5', custom_objects={'custom_loss':custom_loss})
NameError: name 'custom_loss' is not defined

Can you please help?


@sayZeeel Try giving your loss function a name:

def custom_loss_function(inputs):
    def custom_loss(y_true, y_pred):
        print(inputs.shape, y_true.shape, y_pred.shape)
        x = k.exp(inputs[:,12,:])
        y_t = k.log(k.exp(y_true)*x)
        y_p = k.log(k.exp(y_pred)*x)
        # y_p[y_p==0] = 1e-6
        l = y_t - y_p
        return k.square(l)
    custom_loss.__name__ = "Custom Loss"
    return custom_loss

and then:

model = load_model('model/multi_task/try.h5', custom_objects={'Custom Loss': custom_loss_function})

I solved this problem by adding custom_objects:

model = load_model('model/multi_task/try.h5', custom_objects={'loss_max': loss_max})

my loss function:

def loss_max(y_true, y_pred):
    from keras import backend as K
    return K.max(K.abs(y_pred - y_true), axis=-1)

What if I have many losses? I have 7 classifiers with seven loss functions:

losses = {
    "digit1_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[0])),
    "digit2_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[1])),
    "alphaNum_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights_c)),
    "digit3_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[2])),
    "digit4_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[3])),
    "digit5_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[4])),
    "digit6_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[5])),
    "digit7_output": custom_loss.weighted_categorical_crossentropy(np.float32(weights[6]))
}
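
One hedged option for a multi-output case like this, following the compile=False approach described earlier in the thread (the file name here is only a placeholder; losses is the dict built above):

model = load_model('multi_digit_model.h5', compile=False)  # skips loss deserialization entirely
model.compile(optimizer='adam', loss=losses)               # re-attach the same per-output losses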

Thanks @isaacgerg, your solution worked well. However, I had to re-train the model to save the json file since I had only saved my model as a h5 file.

I also tried @Pepslee's direct solution with compile=False option. It works perfectly, thanks a lot! This was the solution I actually needed, because I load the model only for prediction.

In short, both solutions work perfectly for different cases. Use @isaacgerg's solution if you're loading the model for training again (i.e. for transfer learning). Don't forget to store your model as json after training using:

model_json = model.to_json()
with open('model.json', "w") as json_file:
    json_file.write(model_json)

If your only aim is to predict, then use @Pepslee's solution. It's more straightforward.


@Bisgates I'm loading my model using the weights as such:

model = yolo_body(Input(shape=(None, None, 3)), 3, num_classes)
model.load_weights('logs/000/trained_weights_stage_1.h5')
model.compile(optimizer=Adam(lr=1e-4), loss={'yolo_loss': lambda y_true, y_pred: y_pred})

I can't add custom objects; is there a way to add yolo_loss when loading the model using load_weights?
I am using https://github.com/qqwweee/keras-yolo3 github repository, with yolo_body and yolo_loss defined in yolov3.models.py file.

Hi, the same problem here.
One ugly solution that worked for me is to include the custom objective into keras:

import keras.losses
keras.losses.custom_loss = custom_loss

This is a known issue on Keras 1: #3977.
On Keras 2.0 you have to replace keras.objectives with keras.losses.

@dluvizon: Can you elaborate on how to add the custom_loss in keras2.0?

@TanmayParekh @shuaiw24, this is how you do it.

def sparse_weighted_loss_func(weights):

    def sparse_weighted_loss(target, output):
        return tf.multiply(tf.keras.backend.sparse_categorical_crossentropy(target, output), weights)
    return sparse_weighted_loss

model.compile(loss=sparse_weighted_loss_func(weights), ...)

# during loading time, it will expect sparse_weighted_loss not sparse_weighted_loss_func
model = load_model('pathtomodel',
                   custom_objects={'sparse_weighted_loss': sparse_weighted_loss_func(weights)})


The following code worked for me --

generator.compile(loss='mse', optimizer=opt, metrics=[perceptual_distance])
model = load_model("../input/srresnet-epoch-120/pretrain/model00000120.h5", custom_objects={'perceptual_distance': perceptual_distance})
weights = model.get_weights()
generator.set_weights(weights)


Giving the custom loss a name (the __name__ trick above) was the only thing that worked in tf 2.1.

None of the above worked for me, but this did:

Load the model with compile=False and then compile it manually.

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

IMG_WIDTH = 224
IMG_HEIGHT = IMG_WIDTH
CHANNELS = 3
LEARNING_RATE = 1e-5
NUM_LABELS = 128

def create_model():
  feature_extractor_url = 'https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4'
  feature_extractor_layer = hub.KerasLayer(feature_extractor_url,
                                           input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS))
  feature_extractor_layer.trainable = False

  return Sequential([
    feature_extractor_layer,
    Dense(1024, activation='relu', name='hidden_layer'),
    Dense(NUM_LABELS, activation='sigmoid', name='output')
  ])

def load_model(model_dir):
  # got the compile=False idea from @Pepslee's comment:
  # https://github.com/keras-team/keras/issues/5916#issuecomment-457624404
  return tf.keras.models.load_model(model_dir,
                                    compile=False,
                                    custom_objects={'KerasLayer': hub.KerasLayer})
                                    # this didn't work.
                                    # neither did the custom_loss_function with __name__ thing.
                                    # custom_objects={'KerasLayer': hub.KerasLayer,
                                    #                 'custom_loss': custom_loss,
                                    #                 'custom_metric': custom_metric})

def train(prev_model):
  # the trick is to load the model with compile=False
  if prev_model:
    model = load_model(prev_model)
  else:
    model = create_model()

  # and then compile manually,
  # the same way it does with a new model.
  model.compile(
    optimizer=Adam(learning_rate=LEARNING_RATE),
    loss=custom_loss,
    metrics=[custom_metric]
  )

  # model.fit ...

# def custom_loss(y_true, y_pred):
#   ...

# def custom_metric(y_true, y_pred, threshold=0.5):
#   ...

I have a problem when I load this model: ValueError: Unknown loss function:. Can you help me, please?

import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)

Input data files are available in the "../input/" directory.

For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory

import matplotlib.pyplot as plt
import os

Imports

import os
import fnmatch
import cv2
import numpy as np
import string
import time

from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, LSTM, Reshape, BatchNormalization, Input, Conv2D
from keras.layers import MaxPool2D, Lambda, Bidirectional
from keras.models import Model
from keras.activations import relu, sigmoid, softmax
import keras.backend as K
from keras.utils import to_categorical
from keras.callbacks import ModelCheckpoint

import tensorflow as tf
from tensorflow.python.client import device_lib
import xml.etree.ElementTree as ET

read xml file

doc = ET.parse("/home/yosra/Downloads/IAMDataset/xml/a01-000u.xml")

root = doc.getroot()

dic = []
label = []

for i in root.iter('word'):
    dic.append(i.get('id'))
    label.append(i.get('text'))

print(dic, ' ', label)

Global Variables

char_list = string.ascii_letters + string.digits
print("Character List: ", char_list)

function to encode the text into indices of the char list

def encode_to_labels(text):
    # We encode each output word into digits
    digit_list = []
    for index, character in enumerate(text):
        try:
            digit_list.append(char_list.index(character))
        except:
            print("Error in finding index for character ", character)
        # End For
    return digit_list

preprocess the data

read the image from IAM Dataset

n_samples = len(os.listdir('/home/yosra/Desktop/imagetest'))

#Number of samples in xml file

xml_samples = len(dic)

lists for training dataset

training_img = []
training_txt=[]
train_input_length = []
train_label_length = []
orig_txt = []

lists for validation dataset

valid_img = []
valid_txt = []
valid_input_length = []
valid_label_length = []
valid_orig_txt = []

max_label_len = 0

Training Variables

batch_size = 256
epochs = 10

k=1

for i, pic in enumerate(os.listdir('/home/yosra/Desktop/imagetest')):
    # Read image as grayscale
    img = cv2.imread(os.path.join('/home/yosra/Desktop/imagetest', pic), cv2.IMREAD_GRAYSCALE)

    pic_target = pic[:-4]
    # convert each image of shape (32, 128, 1)
    w, h = img.shape

    if h > 128 or w > 32:
        continue
    # endif

    # Process the images to bring them to scale
    if w < 32:
        add_zeros = np.ones((32-w, h))*255
        img = np.concatenate((img, add_zeros))
    # endif
    if h < 128:
        add_zeros = np.ones((32, 128-h))*255
        img = np.concatenate((img, add_zeros), axis=1)
    # endif    

    img = np.expand_dims(img , axis = 2)

    # Normalise the image
    img = img/255.

    # Get the text for the image
    txt = pic_target.split('_')[1]

    # compute maximum length of the text
    if len(txt) > max_label_len:
        max_label_len = len(txt)

    if k%10 == 0:     
        valid_orig_txt.append(txt)   
        valid_label_length.append(len(txt))
        valid_input_length.append(31)
        valid_img.append(img)
        valid_txt.append(encode_to_labels(txt))
    else:
        orig_txt.append(txt)   
        train_label_length.append(len(txt))
        train_input_length.append(31)
        training_img.append(img)
        training_txt.append(encode_to_labels(txt))
    k+=1

print('kamlna')

pad each output label to maximum text length

train_padded_txt = pad_sequences(training_txt, maxlen=max_label_len, padding='post', value = len(char_list))
valid_padded_txt = pad_sequences(valid_txt, maxlen=max_label_len, padding='post', value = len(char_list))

input with shape of height=32 and width=128

inputs = Input(shape=(32,128,1))

convolution layer with kernel size (3,3)

conv_1 = Conv2D(64, (3,3), activation = 'relu', padding='same')(inputs)

pooling layer with kernel size (2,2)

pool_1 = MaxPool2D(pool_size=(2, 2), strides=2)(conv_1)

conv_2 = Conv2D(128, (3,3), activation = 'relu', padding='same')(pool_1)
pool_2 = MaxPool2D(pool_size=(2, 2), strides=2)(conv_2)

conv_3 = Conv2D(256, (3,3), activation = 'relu', padding='same')(pool_2)

conv_4 = Conv2D(256, (3,3), activation = 'relu', padding='same')(conv_3)

pooling layer with kernel size (2,1)

pool_4 = MaxPool2D(pool_size=(2, 1))(conv_4)

conv_5 = Conv2D(512, (3,3), activation = 'relu', padding='same')(pool_4)

Batch normalization layer

batch_norm_5 = BatchNormalization()(conv_5)

conv_6 = Conv2D(512, (3,3), activation = 'relu', padding='same')(batch_norm_5)
batch_norm_6 = BatchNormalization()(conv_6)
pool_6 = MaxPool2D(pool_size=(2, 1))(batch_norm_6)

conv_7 = Conv2D(512, (2,2), activation = 'relu')(pool_6)

squeezed = Lambda(lambda x: K.squeeze(x, 1))(conv_7)

bidirectional LSTM layers with units=128

blstm_1 = Bidirectional(LSTM(128, return_sequences=True, dropout = 0.2))(squeezed)
blstm_2 = Bidirectional(LSTM(128, return_sequences=True, dropout = 0.2))(blstm_1)

outputs = Dense(len(char_list)+1, activation = 'softmax')(blstm_2)

model to be used at test time

act_model = Model(inputs, outputs)

act_model.summary()

The CTC loss function is used to predict the output text; it is very helpful for the text recognition task.

labels = Input(name='the_labels', shape=[max_label_len], dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')

def ctc_lambda_func(args):
    y_pred, labels, input_length, label_length = args
    return K.ctc_batch_cost(labels, y_pred, input_length, label_length)

loss_out = Lambda(ctc_lambda_func, output_shape=(1,), name='ctc')([outputs, labels, input_length, label_length])

model to be used at training time

model = Model(inputs=[inputs, labels, input_length, label_length], outputs=loss_out)

train the model

model.compile(loss={'ctc': lambda y_true, y_pred: y_pred}, optimizer = 'adam')
filepath= "/home/yosra/Downloads/best_model.hdf5"
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_loss', verbose=1, save_best_only=True, mode='auto')

callbacks_list = [checkpoint]
training_img = np.array(training_img)
train_input_length = np.array(train_input_length)
train_label_length = np.array(train_label_length)

valid_img = np.array(valid_img)
valid_input_length = np.array(valid_input_length)
valid_label_length = np.array(valid_label_length)

model.fit(x=[training_img, train_padded_txt, train_input_length, train_label_length],
          y=np.zeros(len(training_img)), batch_size=batch_size,
          epochs=epochs,
          validation_data=([valid_img, valid_padded_txt, valid_input_length, valid_label_length], [np.zeros(len(valid_img))]),
          verbose=1, callbacks=callbacks_list)

model.save(filepath)

test the model

from keras.models import load_model

load the saved best model weights

new_model = load_model(filepath)

When I load my model, I have this error: ValueError: Unknown loss function:, any help please???


Stop posting into this thread!!! There's an answer above, and you're making it hard to find!

WORKAROUND

model = tf.keras.models.load_model(path_here, compile=False)

Could anyone help me solve this problem, please?
This is the definition of my loss function:

def my_loss(y_true, y_pred, lambda_const, i, T, task_size=2):
    y_trueSoft = y_true[:, :(i)*task_size]
    y_predSoft = y_pred[:, :(i)*task_size] / T
    y_trueHard = y_true[:, (i)*task_size:]
    y_predHard = y_pred[:, (i)*task_size:]

    return lambda_const*categorical_crossentropy(y_trueSoft, y_predSoft) + (1-lambda_const)*categorical_crossentropy(y_trueHard, y_predHard)

model_final.compile(loss=lambda y_true, y_pred: my_loss(y_true, y_pred, lambda_const,i,T), optimizer="adam", metrics=["acc"])

And when I want to load this model, I get this error: ValueError: Unknown loss function:.
I have changed the way I load the model, as written below, but I still get the same error. I am so confused.

model_tmp = load_model(model_path_old, custom_objects={'<lambda>': lambda y_true, y_pred: my_loss(y_true, y_pred, lambda_const, i, T)})

Can anyone help me? Thanks a lot


You first need to load your model with compile=False, then compile it. Like this:
model = load_model(model_path_old, custom_objects={'<lambda>': lambda y_true, y_pred: my_loss(y_true, y_pred, lambda_const, i, T)}, compile=False)
model.compile(loss=lambda y_true, y_pred: my_loss(y_true, y_pred, lambda_const, i, T), optimizer="adam", metrics=["acc"])


Thank you so much! But it didn't work. I still got this Unknown loss function:


The custom loss should be supplied when compiling, so it should not be passed to load_model here. The above code piece is not the right answer, sorry. It should be:

model = load_model(model_path, compile=False)
model.compile(loss=lambda y_true, y_pred: my_loss(y_true, y_pred, lambda_const, i, T), optimizer="adam", metrics=["acc"])


Thank you so much. It works! Hope you have a nice day!

@weilinapple your reading comprehension is way below the required minimum for an engineer, consider a different career path.

@Demetrio92 your communication and social skills are below the required minimum for an engineer, consider a different career path.

model.compile(optimizer='adam', metrics=['accuracy'],
              loss=tf.keras.losses.SparseCategoricalCrossentropy())

this code works
