Models: offline_eval_map_corloc: TypeError (expected str instance, bytes found)

Created on 26 Jan 2018 · 12 comments · Source: tensorflow/models

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): see _Source code / logs_
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04.3 LTS (Linux 69c136cba5b6 4.4.38-rt49-tegra #1 SMP PREEMPT RT Tue Jul 25 09:26:02 PDT 2017 aarch64 aarch64 aarch64 GNU/Linux)
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below):
    tf.VERSION = 1.4.0
    tf.GIT_VERSION = v1.4.0-0-gd752244
    tf.COMPILER_VERSION = v1.4.0-0-gd752244
  • Python version: Python 3.5.2 [numpy (1.13.3); protobuf (3.5.0.post1)]
  • Bazel version (if compiling from source): 0.8.1
  • GCC/Compiler version (if compiling from source): 5.4.0 20160609
  • CUDA/cuDNN version: CUDA 8, cuDNN 5
  • GPU model and memory: Pascal GP106, 4GB Memory
  • Exact command to reproduce:
python object_detection/metrics/offline_eval_map_corloc.py \
        --eval_dir=work/train_eval/kitti_val1 \
        --eval_config_path=validation_eval_config.pbtxt \
        --input_config_path=validation_input_config_KITTI.pbtxt

Describe the problem

_Previous steps:_
My aim is to calculate the AP (for the class 'car') of models from the model zoo on the KITTI dataset. Therefore I used create_kitti_tf_record.py (dataset_tools); see point A in the logs below. While generating the tfrecord, I faced the same issue as #3239. The solution SamDon87 mentioned worked for me (see A3).
For the inference (see point B) I successfully processed the smaller val tfrecord (200 images).

_Error:_
After executing offline_eval_map_corloc.py (see "Exact command to reproduce"; further information at points C1 and C2 in the logs), I get the following error message:

INFO:tensorflow:Processing file: work/inference/kitticar_val1_detections.tfrecord-00000-of-00001
INFO:tensorflow:Processed 0 images...
Traceback (most recent call last):
  File "metrics/offline_eval_map_corloc.py", line 173, in <module>
    tf.app.run(main)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 48, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "metrics/offline_eval_map_corloc.py", line 166, in main
    metrics = read_data_and_evaluate(input_config, eval_config)
  File "metrics/offline_eval_map_corloc.py", line 119, in read_data_and_evaluate
    decoded_dict = data_parser.parse(example)
  File "/drive/models/research/object_detection/metrics/tf_example_parser.py", line 149, in parse
    results_dict[key] = parser.parse(tf_example)
  File "/drive/models/research/object_detection/metrics/tf_example_parser.py", line 49, in parse
    self.field_name].HasField("bytes_list") else None
TypeError: sequence item 0: expected str instance, bytes found
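
(For reference, the failure can be reproduced outside TensorFlow. The following is a minimal sketch with a made-up value, showing why joining the items of a bytes_list raises exactly this TypeError under Python 3.)

# Plain Python 3, no TensorFlow needed: str.join() refuses bytes items,
# which is the same failure StringParser.parse() runs into.
values = [b"000001"]            # toy stand-in for bytes_list.value under Python 3
try:
    "".join(values)             # separator is str, items are bytes
except TypeError as err:
    print(err)                  # sequence item 0: expected str instance, bytes found

# Decoding the items first avoids the error:
print("".join(v.decode("utf-8") for v in values))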

_What I tried_
To ensure that the error was not caused by the tfrecord I generated, I used another (non-KITTI) tfrecord. Unfortunately, the same error occurred.

Now I'm stuck and I don't know how to solve this issue.
I am grateful for any help.

Source code / logs

A1 - Command to create tfrecord

python object_detection/dataset_tools/create_kitti_tf_record_mod.py \
        --data_dir=kitti/original/data_dir \
        --output_path=datasets/kitti/original/kitticar \
        --classes_to_use=car \
        --label_map_path=datasets/kitti/kitti_label_map.pbtxt \
        --validation_set_size=200

A2 - kitti_label_map.pbtxt

item {
  id: 1
  name: 'car'
}

A3 - lines 61 to 64 of the modified create_kitti_tf_record_mod.py

tf.app.flags.DEFINE_string('classes_to_use', ['car', 'pedestrian', 'dontcare'],
                           'Which classes of bounding boxes to use. Adding the '
                           'dontcare class will remove all bboxs in the dontcare '
                           'regions.')

B - infer_detections command

python object_detection/inference/infer_detections.py \
  --input_tfrecord_paths=datasets/kitti/original/kitticar_val.tfrecord \
  --output_tfrecord_path=work/inference/kitticar_val1_detections.tfrecord-00000-of-00001 \
  --inference_graph=ssd_inception_v2_coco_2017_11_17/frozen_inference_graph.pb \
  --discard_image_pixels

C1 - validation_input_config_KITTI.pbtxt

label_map_path: 'datasets/kitti/kitti_label_map.pbtxt'
tf_record_input_reader: { input_path: 'work/inference/kitticar_val1_detections.tfrecord@1' }

C2 - validation_eval_config.pbtxt

metrics_set: 'pascal_voc_metrics'

Most helpful comment

I had the same error with Python 3.5 and TF 1.5. The cause is that tf_example.features.feature[self.field_name].bytes_list.value returns bytes instead of str in metrics/tf_example_parser.StringParser.

So I changed tf_example_parser.StringParser as below:

class StringParser(data_parser.DataToNumpyParser):
  """Tensorflow Example string parser."""

  def __init__(self, field_name):
    self.field_name = field_name

  def parse(self, tf_example):
    if tf_example.features.feature[self.field_name].HasField("bytes_list"):
      result = tf_example.features.feature[self.field_name].bytes_list.value
      # Under Python 3 the bytes_list values are bytes, so decode before joining.
      result = "".join([x if isinstance(x, str) else x.decode('utf-8') for x in result])
    else:
      result = None
    return result

I got no errors. But this is only a quick workaround; I think there is a better solution.
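
(A self-contained way to check the fixed behaviour, sketched with the patched parse logic inlined so it does not depend on the object_detection package; the field name and value below are made up.)

import tensorflow as tf

def parse_string_field(tf_example, field_name):
  """Inlined copy of the patched StringParser.parse() logic."""
  feature = tf_example.features.feature[field_name]
  if not feature.HasField("bytes_list"):
    return None
  # bytes_list holds bytes under Python 3, so decode before joining.
  return "".join(x if isinstance(x, str) else x.decode("utf-8")
                 for x in feature.bytes_list.value)

# Build a toy Example containing a single bytes-valued string field.
example = tf.train.Example(features=tf.train.Features(feature={
    "image/source_id": tf.train.Feature(
        bytes_list=tf.train.BytesList(value=[b"000001"]))
}))

print(parse_string_field(example, "image/source_id"))  # -> '000001' as str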

All 12 comments

Could you please try using TensorFlow 1.5 and let us know if you still see the problem?

Thanks for your answer, I will try TF 1.5 today and report back.

_Just a general question:_ Is this pipeline (take the frozen graph from the model zoo -> run inference with the graph and the eval images -> run the evaluation) the appropriate way to obtain evaluation results for a frozen graph?


Update: I just installed TF 1.5.0 and executed the same command. Unfortunately, I receive the identical error:

INFO:tensorflow:Processing file: work/inference/kitticar_val1_detections.tfrecord-00000-of-00001
INFO:tensorflow:Processed 0 images...
Traceback (most recent call last):
  File "metrics/offline_eval_map_corloc.py", line 173, in <module>
    tf.app.run(main)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 124, in run
    _sys.exit(main(argv))
  File "metrics/offline_eval_map_corloc.py", line 166, in main
    metrics = read_data_and_evaluate(input_config, eval_config)
  File "metrics/offline_eval_map_corloc.py", line 119, in read_data_and_evaluate
    decoded_dict = data_parser.parse(example)
  File "/drive/models/research/object_detection/metrics/tf_example_parser.py", line 149, in parse
    results_dict[key] = parser.parse(tf_example)
  File "/drive/models/research/object_detection/metrics/tf_example_parser.py", line 49, in parse
    self.field_name].HasField("bytes_list") else None
TypeError: sequence item 0: expected str instance, bytes found
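
(One way to confirm what the parser actually sees, sketched below; it assumes TF 1.x and that the detections file stores the image id in the standard 'image/source_id' field.)

import tensorflow as tf

path = "work/inference/kitticar_val1_detections.tfrecord-00000-of-00001"
record = next(tf.python_io.tf_record_iterator(path))  # first serialized Example

example = tf.train.Example()
example.ParseFromString(record)

values = example.features.feature["image/source_id"].bytes_list.value
print(type(values[0]))  # <class 'bytes'> under Python 3, hence the failing join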

Hello,
I just gave it another try. I found the nice raccoon repo ( https://github.com/datitran/raccoon_dataset/ ) and used its files, because I thought the error might have come from my tfrecord files.

At first I took the raccoon train.record and the graph of ssd_inception_v2_coco (directly from the model zoo). I ran infer_detections.py and received the detections tfrecord. Next I tried to evaluate with offline_eval_map_corloc.py, but the same error (TypeError: sequence item 0: expected str instance, bytes found) showed up.

After that I replaced ssd_inception_v2_coco with the ssd_mobilenet_v1_coco graph and repeated the steps -> the same error occurred (TypeError).

Then I ran infer_detections.py with the raccoon train.record and the raccoon output_inference_graph.pb from the repo ... -> offline_eval_map_corloc.py -> the same error (TypeError).


If anyone else has an idea about what I could try out, let me know.

@tombstone Can you take a look at this?

I had the same error with Python 3.5 and TF 1.5. The cause is that tf_example.features.feature[self.field_name].bytes_list.value returns bytes instead of str in metrics/tf_example_parser.StringParser.

So I changed tf_example_parser.StringParser as below:

class StringParser(data_parser.DataToNumpyParser):
  """Tensorflow Example string parser."""

  def __init__(self, field_name):
    self.field_name = field_name

  def parse(self, tf_example):
    if tf_example.features.feature[self.field_name].HasField("bytes_list"):
      result = tf_example.features.feature[self.field_name].bytes_list.value
      # Under Python 3 the bytes_list values are bytes, so decode before joining.
      result = "".join([x if isinstance(x, str) else x.decode('utf-8') for x in result])
    else:
      result = None
    return result

I got no errors. But this is only a quick workaround; I think there is a better solution.

I tried @ohnabe's solution and it fixed the error for me.
Thank you very much for your support!

Great. @ohnabe thank you for your help!

@ohnabe

I've used your method, except with:

result = "".join([x if isinstance(x, str) else x.decode('utf-8', 'ignore') for x in result])

to handle bytes which can't be decoded.
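
(For illustration, a toy value showing what the extra argument changes: invalid UTF-8 bytes are dropped instead of raising.)

raw = b"car\xff\xfe"                  # made-up value containing bytes that are not valid UTF-8
print(raw.decode("utf-8", "ignore"))  # -> 'car' (invalid bytes silently dropped)
# raw.decode("utf-8") would raise UnicodeDecodeError here.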

@varun19299 Thanks! Great!

Thanks a lot. This worked.

Hi,
I used the command below to print the confusion matrix:
python confusionmatrix.py --detections_record=test.record --label_map=training/labelmap.pbtxt --output_path=confusion_matrix.csv

I got the confusion matrix, but the rows and columns are showing as zero, and precision and recall are showing as NaN. Please check the screenshot below to verify the issue. Thanks
[screenshot]

Hello, I am using the script ( https://www.shiftedup.com/2018/10/10/confusion-matrix-in-object-detection-api-with-tensorflow ). I generated the confusion matrix, but I only have two classes and the matrix has a size of 3x3, which seems wrong. Could someone explain this to me? Thank you.


Confusion Matrix:
[[1. 1. 7.]
 [0. 3. 3.]
 [2. 1. 0.]]

category ... [email protected]
0 Glasses ... 0.111111
1 Pen ... 0.500000

[2 rows x 3 columns]
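
(If the script follows the common convention that the extra last row/column collects unmatched ground-truth boxes and unmatched detections, a 3x3 matrix for two classes is expected rather than wrong. Under that assumption, and taking rows as ground truth and columns as detections, per-class precision and recall can be read off the matrix roughly as in this sketch, using the numbers and class names shown above.)

import numpy as np

# 2 classes + 1 extra row/column for unmatched ground truth / detections
cm = np.array([[1., 1., 7.],
               [0., 3., 3.],
               [2., 1., 0.]])

for i, name in enumerate(["Glasses", "Pen"]):
    tp = cm[i, i]
    col_sum, row_sum = cm[:, i].sum(), cm[i, :].sum()
    precision = tp / col_sum if col_sum else float("nan")  # NaN if the class is never detected
    recall = tp / row_sum if row_sum else float("nan")     # NaN if the class has no ground truth
    print("{}: precision={:.3f}  recall={:.3f}".format(name, precision, recall))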

