So, I ran the Keras example code for the Inception-v3 model and the predictions are way off. I suspect there is an error in the weights. Does anyone know why this is happening? I tested other images on the Inception-v3 model and it gives the same predictions for every different image. Any insight into the problem would be appreciated.
I am using:
Keras 2.0.4, Python 3.5 (64 bit)
https://github.com/fchollet/keras/blob/master/keras/applications/inception_v3.py
This is the code I am running:
import numpy as np
from keras.applications.inception_v3 import InceptionV3
from keras.preprocessing import image
from keras.applications.imagenet_utils import preprocess_input, decode_predictions
if __name__ == '__main__':
    model = InceptionV3(include_top=True, weights='imagenet')
    img_path = 'elephant.jpg'
    img = image.load_img(img_path, target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    x = preprocess_input(x)
    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds))
The result given is:
Predicted: [[('n01924916', 'flatworm', 0.99995065), ('n03047690', 'clog', 4.9389007e-05), ('n04366367', 'suspension_bridge', 1.075191e-08), ('n01665541', 'leatherback_turtle', 2.5111552e-10), ('n03950228', 'pitcher', 6.6290827e-11)]]
When I run the same image through the ResNet50 model, it gives these results:
Predicted: [[('n02504458', 'African_elephant', 0.59942758), ('n01871265', 'tusker', 0.33637413), ('n02504013', 'Indian_elephant', 0.061940487), ('n02397096', 'warthog', 0.0016048651), ('n02396427', 'wild_boar', 0.00016479047)]]
I had this problem too, but I was able to fix it by passing the numpy.expand_dims output through
keras.applications.inception_v3.preprocess_input
instead of
keras.applications.imagenet_utils.preprocess_input
as you are.
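To make the fix concrete, here is a sketch of the corrected script. The only substantive change from the snippet above is importing preprocess_input from keras.applications.inception_v3, which rescales pixel values to the [-1, 1] range that InceptionV3 was trained on; the generic imagenet_utils version defaults to the 'caffe'-style mean subtraction that ResNet50 expects. The os.path.exists guard is just a convenience so the heavy model load only runs when the image file is present.

```python
import os

import numpy as np
# Import preprocess_input from the inception_v3 module, NOT from
# imagenet_utils: InceptionV3 expects inputs scaled to [-1, 1].
from keras.applications.inception_v3 import (InceptionV3, preprocess_input,
                                             decode_predictions)
from keras.preprocessing import image

if __name__ == '__main__' and os.path.exists('elephant.jpg'):
    model = InceptionV3(include_top=True, weights='imagenet')

    img = image.load_img('elephant.jpg', target_size=(299, 299))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)          # add batch dimension
    x = preprocess_input(x)                # maps [0, 255] -> [-1, 1]

    preds = model.predict(x)
    print('Predicted:', decode_predictions(preds))
```

With the model-specific preprocessing, the top prediction for an elephant photo should agree with the ResNet50 output above rather than collapsing to the same class for every image.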