Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue.
Thank you!
[x] Check that you are up-to-date with the master branch of Keras. You can update with:
pip install git+git://github.com/keras-team/keras.git --upgrade --no-deps
[x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
[ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
[ ] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
To compare accuracies between the original examples from Keras and my results using modifications of these networks, I ran the ILSVRC2012 validation set, without the blacklisted files. To my surprise, the results were lower than those reported on the Keras page. For instance, on InceptionV3 I got 76.6 and 93.2 for top-1 and top-5, versus the published 78.8 and 94.4; the same happened for MobileNet.
What could this mean? I am running them on Windows, on TensorFlow. I cannot imagine any mistake I could have made that would change the accuracies (I have not changed the original code). Thanks!
I just read that some people crop the image for testing, but what specific crop was used, if this is the case? A sketch of my evaluation setup is below.
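This is roughly what I do per image (a minimal sketch, not my full evaluation script): a direct 299x299 resize with the model's own preprocess_input, and no centre crop. The file name and the resize strategy are assumptions on my part; if the reference numbers were produced with a resize-then-centre-crop, that alone could explain part of the gap.

```python
import numpy as np
from keras.applications.inception_v3 import InceptionV3, preprocess_input, decode_predictions
from keras.preprocessing import image

# Pretrained InceptionV3 with ImageNet weights, unmodified.
model = InceptionV3(weights='imagenet')

# Example validation image path (placeholder); resized directly to 299x299,
# with no centre crop -- this is the assumption being questioned above.
img = image.load_img('ILSVRC2012_val_00000001.JPEG', target_size=(299, 299))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

preds = model.predict(x)
print(decode_predictions(preds, top=5)[0])
```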
I am having a similar problem with Keras 2.1.6 when doing one of the "Introduction to Deep Learning" course assignments, i.e. week 3 / fine-tuning InceptionV3 gives me validation accuracies way off the expected values:
loss: 0.0520; acc: 0.9962; val_loss: 3.4239; val_acc: 0.3880: 26it [02:36, 17.82s/it]
After downgrading from Keras 2.1.6 to 2.0.6, I received the expected results:
loss: 0.0938; acc: 0.9900; val_loss: 0.2343; val_acc: 0.9375: 26it [02:01, 13.27s/it]
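For context, the fine-tuning setup is roughly the following (a minimal sketch, not the exact assignment code; the class count, input shape, and optimizer are placeholders): the pretrained backbone, including its BatchNormalization layers, is frozen and only the new head is trained.

```python
from keras.applications.inception_v3 import InceptionV3
from keras.layers import Dense, GlobalAveragePooling2D
from keras.models import Model

# Pretrained backbone without the classification head.
base = InceptionV3(weights='imagenet', include_top=False, input_shape=(299, 299, 3))
for layer in base.layers:
    layer.trainable = False  # freeze everything, including BN layers

# New classification head (hypothetical number of classes).
x = GlobalAveragePooling2D()(base.output)
out = Dense(10, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(...) on the course data; val_acc diverges on 2.1.3+ but not on 2.0.6
```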
Thanks Aleksey! Wow, that is a large difference. I will try what you did, but I still wonder how that could be possible. Does it mean that the same code is interpreted differently in the two versions? Any ideas?
@fernande2000 Sorry, no idea. I am not a Keras developer, so I do not know what's inside. I can only confirm that this is true, and it cost me several hours of investigation, as I am new to Keras and did not expect this issue.
I ran tests with different Keras versions and found that this problem was introduced in 2.1.3; 2.1.2 works fine.
In Keras 2.1.3, the behaviour of the BatchNormalization layer changed: when trainable=False it no longer updates its moving mean/variance statistics during training. This change is actually a good thing, because a frozen layer should not modify any of its parameters. Nevertheless, this change, combined with another BN policy (the use of mini-batch statistics instead of the moving mean/variance while the frozen layer is in training mode), can cause issues during fine-tuning.
It is likely, but not 100% certain, that you are affected by this, so to confirm do the following:
If indeed you are affected, try installing this Keras 2.1.6 fork and retraining the network. Let me know if the problem is fixed.
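One quick way to check whether frozen BN layers are involved (my own sketch, not necessarily the confirmation steps referred to above) is to compare the fine-tuned model's outputs in inference mode against training mode on the same batch; `model` and `x_batch` below are placeholders for your trained model and a batch of preprocessed validation images.

```python
import numpy as np
from keras import backend as K

# Build a function that lets us feed the learning phase explicitly:
# 0 = inference mode (BN uses moving mean/var), 1 = training mode
# (a frozen BN in 2.1.3+ may still normalise with mini-batch statistics).
get_outputs = K.function([model.input, K.learning_phase()], [model.output])

preds_inference = get_outputs([x_batch, 0])[0]
preds_training = get_outputs([x_batch, 1])[0]

# A large gap between the two suggests the BatchNormalization behaviour
# described above is what hurts the validation accuracy.
print(np.abs(preds_inference - preds_training).mean())
```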
@datumbox Thanks for the answer. I cannot do what you asked because I did not train the model; I took the pretrained one from Keras with ImageNet weights and just tested it on the ImageNet validation set, and the accuracy did not agree with the one reported on the Keras Applications page.
@fernande2000 Understood. :)
I was actually hoping that @AlekseyMalyshev can run it as he presented training logs earlier.
@datumbox it needs a few hours to test. I'll see when I have time.