Keras: Different metrics/losses for multiple outputs with shared models

Created on 25 Dec 2017 · 4 comments · Source: keras-team/keras

I have a small Keras model S which I reuse several times in a bigger model B. I take the different outputs of S and want to apply different losses/metrics to each of them, but Keras doesn't let me: all the outputs get the same name because they are all outputs of S. How can I get around this?

Example:

```python
from keras.layers import Input, Dense, add
from keras.models import Model

# S model
inputs = Input(shape=(100,))
out = Dense(5, activation='relu')(inputs)
out = Dense(100)(out)
S = Model(inputs=inputs, outputs=out)

# B model
inputs = Input(shape=(100,))
out1 = S(inputs)
out2 = S(out1)
out3 = S(out2)
out4 = add([out1, out2, out3])
out4 = S(out4)
B = Model(inputs=inputs, outputs=[out1, out2, out3, out4])

# I want the following losses/metrics to apply:
#   out1: [loss_1, metric_1]
#   out2: [loss_2, metric_2]
#   out3: [loss_3]
#   out4: [loss_4, metric_4_1, metric_4_2]
B.compile(optimizer='rmsprop',
          loss=[loss_1, loss_2, loss_3, loss_4],
          loss_weights=[1.0, 1.0, 1.0, 1.0],
          metrics=[metric_1, metric_2, None, [metric_4_1, metric_4_2]])
# The above compilation doesn't achieve the desired result
```

I know the example is probably a completely useless network, but it conveys the problem simply.


All 4 comments

Hi, maybe I can help. I've worked a bit with multi-output networks before. Can you give the error message, or the expected output (and the actual output)?
Thank you.

@gabrieldemarmiesse The main problem is how to point each metric at the desired output when a shared network is used to produce those outputs.
Usually this is easily done with a dictionary:

```python
B.compile(..., metrics={'out1': 'metric_1', 'out2': 'metric_2',
                        'out4': ['metric_4_1', 'metric_4_2']})
```

This only works if the output layers have distinct names, so that we can point a specific metric at a specific layer name.

But in this case the S model he uses for his outputs is a shared model, so the output layer names are all the same. This makes it impossible for compile() to point a metric at a specific application of model S.
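As a rough illustration of the collision (assuming Keras 2.x and the B model from the original post, with S left at its auto-generated layer name):

```python
# Every output of B is produced by the same shared layer S, so the
# output names all repeat S's layer name, and a losses/metrics dict
# keyed by output name cannot distinguish between them.
print(B.output_names)
# e.g. ['model_1', 'model_1', 'model_1', 'model_1']
```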

Hello @chausies,
I think I've had a similar problem with a VGG submodel.
I solved the issue by adding Lambda layers that apply the identity under different names.
Then you can use a dictionary to map the losses correctly.
With your code, it should be something like this:

```python
from keras.layers import Lambda

out2 = S(out1)
out3 = S(out2)

out2_custom = Lambda(lambda x: x, name="out2_custom")(out2)
out3_custom = Lambda(lambda x: x, name="out3_custom")(out3)
```
Hope this helps
Ciao
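
For completeness, a minimal end-to-end sketch of this workaround applied to the B model from the original post; the 'mse', 'mae' and 'mape' losses/metrics below are placeholders standing in for loss_1, metric_1, etc.:

```python
from keras.layers import Input, Dense, Lambda, add
from keras.models import Model

# Shared submodel S
inputs = Input(shape=(100,))
out = Dense(5, activation='relu')(inputs)
out = Dense(100)(out)
S = Model(inputs=inputs, outputs=out)

# Bigger model B: each reuse of S is followed by an identity Lambda,
# which gives that output a unique, addressable name.
inputs = Input(shape=(100,))
out1 = Lambda(lambda x: x, name='out1')(S(inputs))
out2 = Lambda(lambda x: x, name='out2')(S(out1))
out3 = Lambda(lambda x: x, name='out3')(S(out2))
out4 = Lambda(lambda x: x, name='out4')(S(add([out1, out2, out3])))
B = Model(inputs=inputs, outputs=[out1, out2, out3, out4])

# Losses, loss weights and metrics can now be keyed by output name.
B.compile(optimizer='rmsprop',
          loss={'out1': 'mse', 'out2': 'mse', 'out3': 'mse', 'out4': 'mse'},
          loss_weights={'out1': 1.0, 'out2': 1.0, 'out3': 1.0, 'out4': 1.0},
          metrics={'out1': 'mae', 'out2': 'mae',
                   'out4': ['mae', 'mape']})
```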

@MrMey Thanks!! It works!!
Larry
