Keras: freezing weights of a model

Created on 23 Sep 2017 · 1 comment · Source: keras-team/keras

Looking at the following code:
https://github.com/fchollet/keras/blob/master/examples/mnist_acgan.py#L161

the generator can be trained through the combined model, but the combined model shares the same discriminator model that was defined and compiled earlier.
If we set trainable = False after a model has been compiled, are its weights trainable or frozen? Can the discriminator still be trained?

Thank you!
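For context, here is a minimal sketch of the pattern in that example. The layers here are hypothetical stand-ins, and the setup is simplified to a plain GAN; the real mnist_acgan.py uses conv nets and an auxiliary class output.

from keras.models import Sequential, Model
from keras.layers import Input, Dense

# hypothetical stand-in networks, not the actual mnist_acgan.py code
discriminator = Sequential([Dense(1, activation='sigmoid', input_dim=784)])
# compiled while trainable, so discriminator.train_on_batch() updates weights
discriminator.compile(optimizer='adam', loss='binary_crossentropy')

generator = Sequential([Dense(784, activation='tanh', input_dim=100)])

# freeze the discriminator *before* compiling the combined model:
# compile() captures the trainable flags in effect at that moment, so
# this combined model trains only the generator, while the earlier
# compile still lets the discriminator train on its own batches
discriminator.trainable = False
latent = Input(shape=(100,))
combined = Model(latent, discriminator(generator(latent)))
combined.compile(optimizer='adam', loss='binary_crossentropy')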

Most helpful comment

I have this question too. As per the text on https://keras.io/applications/,

from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

# base pre-trained model plus a new head, from the same docs example
base_model = InceptionV3(weights='imagenet', include_top=False)
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(1024, activation='relu')(x)
predictions = Dense(200, activation='softmax')(x)

# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)

# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

# compile the model (should be done *after* setting layers to non-trainable)
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')

which seems to suggest that the model needs to be re-compiled every time we change trainable from False to True (or vice versa). However, when the model is shared, as in this case, how does that work?
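One way to see how this plays out when a model is shared (a sketch, assuming Keras 2.x behavior: each compile() snapshots the trainable flags in effect at that moment, so two compiled models can share the same layers yet update different weight sets):

import numpy as np
from keras.models import Sequential, Model
from keras.layers import Input, Dense

shared = Sequential([Dense(2, input_dim=4)])   # model shared by both
shared.compile(optimizer='sgd', loss='mse')    # compiled while trainable

shared.trainable = False                       # flip the flag...
inp = Input(shape=(4,))
wrapper = Model(inp, shared(inp))
wrapper.compile(optimizer='sgd', loss='mse')   # ...snapshotted as frozen here

x = np.random.rand(8, 4)
y = np.random.rand(8, 2)
w0 = shared.get_weights()[0].copy()

wrapper.train_on_batch(x, y)
print(np.allclose(w0, shared.get_weights()[0]))  # True: frozen inside wrapper

shared.train_on_batch(x, y)
print(np.allclose(w0, shared.get_weights()[0]))  # False: its own compile predates the flip

So no global re-compile is needed: each compiled model keeps whatever trainable configuration it saw at its own compile() call (later Keras versions print a warning about the trainable discrepancy, but training follows the compiled snapshot).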
