Fine-tuning a CNN image classifier layer-wise with Keras is a common use case, but it takes time to figure out which layer indices to unfreeze each time. Per the documentation page for the applications, this is how to fine-tune Inception V3 when training only the top 2 inception blocks, but more detailed information on the other block boundaries would be useful.
```python
# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
```
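Rather than trusting hard-coded indices across Keras versions, the block boundaries can be derived from layer names: the Keras InceptionV3 implementation names its block-ending concatenation layers `mixed0` through `mixed10` (treat that naming scheme, and the filtering of sub-concatenations like `mixed9_0`, as assumptions to verify against your installed version). A minimal sketch:

```python
def block_end_indices(layer_names, prefix="mixed"):
    """Map each block-ending concatenation layer name (e.g. 'mixed3')
    to its index in the layer list.

    Names containing an underscore (e.g. 'mixed9_0') are skipped,
    assuming they are intermediate sub-concatenations rather than
    block boundaries.
    """
    return {
        name: i
        for i, name in enumerate(layer_names)
        if name.startswith(prefix) and "_" not in name
    }
```

Usage would be something like `block_end_indices([l.name for l in model.layers])` on a loaded `InceptionV3` model; freezing everything up to and including a given `mixedN` index then leaves the remaining blocks trainable.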
- [x] Check that you are up-to-date with the master branch of Keras. You can update with:
  `pip install git+git://github.com/keras-team/keras.git --upgrade --no-deps`
- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
- [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
  `pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps`
- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
If anyone is interested, I dug up the numbers for Inception v3 as implemented in Keras.
Inception v3 block number | Last layer index of block | Freeze layers till (use `model.layers[:n]`) | Number of trainable blocks
--|----|----|---
0 | 40 | 41 | 10
1 | 63 | 64 | 9
2 | 86 | 87 | 8
3 | 100 | 101 | 7
4 | 132 | 133 | 6
5 | 164 | 165 | 5
6 | 196 | 197 | 4
7 | 228 | 229 | 3
8 | 248 | 249 | 2
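Assuming these indices hold for your Keras version (they can shift between releases, so verify against your own `model.layers` first), a small helper can wrap the freeze/unfreeze boilerplate. The `FREEZE_INDEX` mapping below is just the "trainable blocks" and "freeze till" columns of the table above:

```python
# Slice indices taken from the table: freezing model.layers[:n]
# leaves the last `n_trainable_blocks` inception blocks trainable.
# These indices are specific to the Keras InceptionV3 implementation
# discussed here and may differ in other versions.
FREEZE_INDEX = {
    10: 41, 9: 64, 8: 87, 7: 101, 6: 133,
    5: 165, 4: 197, 3: 229, 2: 249,
}

def set_trainable_blocks(model, n_trainable_blocks):
    """Freeze all layers before the slice index, unfreeze the rest."""
    idx = FREEZE_INDEX[n_trainable_blocks]
    for layer in model.layers[:idx]:
        layer.trainable = False
    for layer in model.layers[idx:]:
        layer.trainable = True
    return idx
```

Note that after flipping `trainable` flags you need to call `model.compile(...)` again, since Keras only picks up the change at compile time.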