Models: Conversion success, but detect.tflite returns weird results.

Created on 26 Oct 2020 · 12 comments · Source: tensorflow/models

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [Y] I am reporting the issue to the correct repository. (Model Garden official or research directory)

1. The entire URL of the file you are using

https://github.com/tensorflow/models/tree/master/research/object_detection

2. Describe the bug

I fine-tuned an SSD-MobileNet V2 model with TensorFlow 2.3, then converted it to TFLite with tf-nightly. The conversion succeeded and produced a detect.tflite file of 11 MB. But when I use this model in my Android project, it outputs weird results, with extremely low confidences across all 10 detected objects. For comparison, I used TensorFlow 1.15 and tf-nightly to generate a TFLite model of SSD-MobileNet V1, and the results turned out good.

3. Steps to reproduce

See the description above.

4. Expected behavior

I expect a detailed, working flow for generating a TFLite model of SSD-MobileNet V2.

5. Additional context

The bad results of SSD-MobileNet V2:
[screenshot of detection results]

The good results of SSD-MobileNet V1:
[screenshot of detection results]

6. System information

  • Linux Ubuntu 16.04, Android.
  • Simulator: Nexus_S_API_30
  • Python version: 3.7
Labels: research, awaiting model gardener, bug

All 12 comments

@OswinGuai are you sure this is a duplicate? Here the MobileNet SSD v2 doesn't work well after conversion, while in #9287 the same model works well after conversion but ResNet doesn't. They may be related issues, but in this case the conversion does not work with MobileNet SSD v2.
I can reproduce the same issue with MobileNet SSD v2; I will run some tests with the TF1 version.

@huberemanuel Sorry, my mistake, they are not duplicates. You are right. I tried several times across versions but only got errors or wrong predictions. Good luck.

@OswinGuai Can you give some pointers to your inference code? It seems like there is some difference in what the TF1 model did, vs TF2.

Guys, I finally got some good results with the converted TFLite MobileNet SSD model. At first I was running tests with TF1, but I moved back to TF2 after a newer version of tf-nightly fixed the bug that produced a TFLite model smaller than 1 KB. Prediction time also improved: previously the original model was actually faster than the TFLite model, which didn't make sense (again, fixed by the new tf-nightly).
As for qualitative results, the predictions were still poor: while the original model was around 80% accurate, the TFLite model was not even 10% accurate. The solution was to normalize the input image as follows:

import numpy as np
import tensorflow as tf

ori_img = tf.keras.preprocessing.image.load_img(image_path, target_size=input_shape)
input_data = np.array(np.expand_dims(ori_img, 0), dtype=np.float32)

# Important line: scale uint8 pixels [0, 255] to float32 [-1, 1]
input_data = input_data / 128 - 1

After that, my quantized TFLite model reaches 80% accuracy, just like the original model.
Hopefully this helps you correct your issue, @OswinGuai.
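For reference, the rescaling above is the standard Inception-style normalization. A minimal NumPy sketch (the function name is mine, not from this thread):

```python
import numpy as np

def normalize_for_ssd(image_uint8):
    """Map uint8 pixels in [0, 255] to float32 in [-1, 1] (Inception-style),
    which is what the float SSD-MobileNet TFLite input expects."""
    return image_uint8.astype(np.float32) / 128.0 - 1.0
```

For example, `normalize_for_ssd(np.array([0, 128, 255], dtype=np.uint8))` returns approximately `[-1.0, 0.0, 0.992]`.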

If it helps, this is my TFLite conversion procedure (I removed an unused tf.saved_model.load call and a duplicated OpsSet entry from my original snippet):

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir, signature_keys=['serving_default'])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS]
converter.experimental_new_converter = True
converter.allow_custom_ops = True
tflite_model = converter.convert()

@srjoglekar246 is the input normalization a required operation? I could not find it in the TFLite example.

> @OswinGuai Can you give some pointers to your inference code? It seems like there is some difference in what the TF1 model did, vs TF2.

@srjoglekar246 Excuse me for the late response. Here is what I tried for inference with TensorFlow 2.x:
[screenshot of inference code]

By the way, there is a different error if I do the inference like this:
[screenshot of alternative inference code]

The first way runs, but the results are wrong. The error from the second way says _Input tensor has type kTfLiteFloat32_:
[screenshot of the error message]
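The kTfLiteFloat32 error above usually means the array fed to the interpreter does not match the model's declared input dtype. A hedged sketch of dispatching on `interpreter.get_input_details()` (the helper name is mine); shown NumPy-only so the logic is clear:

```python
import numpy as np

def prepare_input(image_uint8, input_detail):
    """Cast a uint8 image batch to whatever dtype the TFLite input tensor
    declares; `input_detail` is one entry of interpreter.get_input_details()."""
    if input_detail["dtype"] == np.float32:
        # Float model: also normalize to [-1, 1] as the SSD export expects.
        return image_uint8.astype(np.float32) / 128.0 - 1.0
    # Fully quantized model: feed raw uint8 pixels.
    return image_uint8
```

The result would then go to `interpreter.set_tensor(input_detail["index"], ...)` before `invoke()`.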

@huberemanuel Cool, I will try it as soon as possible.

@huberemanuel Yes, you need to do the preprocessing as mentioned here. Could you check whether there is a difference between that and your method?

@srjoglekar246 the script you mentioned does the preprocessing, but not in the way I was trying to describe; let me rephrase.
When I run inference on an image after training with the SavedModel format, I don't need to preprocess the input to get good results; I believe that preprocessing step is done inside the network. However, when I run the same inference with a TFLite model, I need to do the preprocessing by hand (or call a function that does it), because the model itself doesn't take care of this step, leading to bad results if you don't normalize your image.
I just found in export_tflite_graph_tf2.py that this normalization is indeed required, as it states "image: a float32 tensor of shape [1, height, width, 3] containing the normalized input image." But this step is not done in the Colab example script, so I think that script should be updated with this normalization (I can help with this); more importantly, the normalization should be highlighted in the documentation.
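For completeness, a sketch of how the script above is typically invoked to produce the TFLite-compatible SavedModel before running the converter (all paths are placeholders for your own files):

```shell
python object_detection/export_tflite_graph_tf2.py \
  --pipeline_config_path=path/to/pipeline.config \
  --trained_checkpoint_dir=path/to/checkpoint \
  --output_directory=path/to/exported
```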

@huberemanuel Agreed, the Inception-style pre-processing is a detail many people tend to miss. I will probably get to the documentation in a while, but if you are interested, feel free to send a PR to our documentation.

Just correcting what I said earlier: the Colab example does do the preprocessing, my mistake. I will make a PR highlighting this required procedure. Thank you @srjoglekar246

@OswinGuai Could you post your (untrained) TFLite model or pipeline config for me to take a look? I assume you are using the SSD Android example, after modifying the app code parameters (input size, etc.) if required?

