Models: "UnicodeDecodeError" running infer_detections script

Created on 7 Apr 2018 · 12 comments · Source: tensorflow/models

System information

  • What is the top-level directory of the model you are using: faster_rcnn_resnet50_coco
  • Have I written custom code: I am using infer_detections.py unmodified
  • OS Platform and Distribution: Windows 10
  • TensorFlow installed from: binary
  • TensorFlow version: 1.5.0
  • CUDA/cuDNN version:
  • GPU model and memory: Intel HD Graphics 520, 8242 MB
  • Exact command to reproduce:

python -m infer_detections --input_tfrecord_paths=../data/coco_testdev.record --output_tfrecord_path=../data/inference --inference_graph=../model/fine_tuned_model/frozen_inference_graph.pb --discard_image_pixels

See my repository for the complete workflow.

Describe the problem

Running the infer_detections script with my frozen graph and test set in the following way:

python -m infer_detections --input_tfrecord_paths=../data/coco_testdev.record --output_tfrecord_path=../data/inference --inference_graph=../model/fine_tuned_model/frozen_inference_graph.pb --discard_image_pixels

throws the error UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 394: invalid start byte.
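
For reference, 0xff is not a valid UTF-8 start byte, so reading any non-UTF-8 binary file in text mode on Python 3 produces the same failure. A minimal, purely illustrative sketch (the file name is hypothetical; any binary .pb will do):

with open('frozen_inference_graph.pb', 'r', encoding='utf-8') as f:
    f.read()  # raises UnicodeDecodeError: 'utf-8' codec can't decode byte ...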

Source code / logs

The complete stack trace is:

Traceback (most recent call last):
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\me\Documents\GitHub\TransferLearningWithTensorflowAPI\scripts\infer_detections.py", line 96, in <module>
    tf.app.run()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 124, in run
    _sys.exit(main(argv))
  File "C:\Users\me\Documents\GitHub\TransferLearningWithTensorflowAPI\scripts\infer_detections.py", line 74, in main
    image_tensor, FLAGS.inference_graph)
  File "C:\ProgramData\Anaconda3\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\inference\detection_inference.py", line 69, in build_inference_graph
    graph_content = graph_def_file.read()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 126, in read
    pywrap_tensorflow.ReadFromStream(self._read_buf, length, status))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 94, in _prepare_value
    return compat.as_str_any(val)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\compat.py", line 106, in as_str_any
    return as_str(value)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\util\compat.py", line 84, in as_text
    return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 394: invalid start byte

Most helpful comment

@mldm4 and @fera0013 this is because the object_detection/export_inference_graph.py script wrote the protobuf in a binary format.

Hence, there is an error when object_detection/inference/infer_detections.py calls build_inference_graph in object_detection/inference/detection_inference.py.

Change lines 68-69 of object_detection/inference/detection_inference.py from

with tf.gfile.Open(inference_graph_path, 'r') as graph_def_file:
    graph_content = graph_def_file.read()

to

with tf.gfile.Open(inference_graph_path, 'rb') as graph_def_file:
    graph_content = graph_def_file.read()

In short, add binary read flags.
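
For completeness, here is a minimal sketch of how the frozen graph can then be loaded (assuming TF 1.x and the path from this issue; this is illustrative, not the exact object_detection code):

import tensorflow as tf  # TF 1.x, as used in this thread

# Path taken from the issue; adjust to your own frozen graph.
inference_graph_path = '../model/fine_tuned_model/frozen_inference_graph.pb'

graph_def = tf.GraphDef()
# 'rb' is essential: the exported graph is a binary protobuf, and text mode
# makes Python 3 attempt a UTF-8 decode, which fails on bytes such as 0xff.
with tf.gfile.Open(inference_graph_path, 'rb') as graph_def_file:
    graph_content = graph_def_file.read()
graph_def.ParseFromString(graph_content)
tf.import_graph_def(graph_def, name='')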

I haven't checked whether any other scripts need changes on this account. If so, I'll post an update to this reply.

There have been quite a few commits in the last few months to extend Python 3 support. I guess this was missed.

@tensorflowbutler could you please review the pull requests for Python 3 support? If this hasn't been addressed in a pull request before, I'll open one.

All 12 comments

Thank you for your post. We noticed you have not filled out the following field in the issue template. Could you update it if it is relevant in your case, or leave it as N/A? Thanks.
Bazel version

Thanks for your answer.

What "following fields" haven't I filled out? You did not mention any.

You may assume that any missing field is N/A.

Hi, I am facing the same issue. Did you find a solution, @fera0013?
Thanks.

@mldm4 and @fera0013 this is because the object_detection/export_inference_graph.py script wrote the protobuf in a binary format.

Hence, there is an error when object_detection/inference/infer_detections.py calls build_inference_graph in object_detection/inference/detection_inference.py.

Change lines 68-69 of object_detection/inference/detection_inference.py from

with tf.gfile.Open(inference_graph_path, 'r') as graph_def_file:
    graph_content = graph_def_file.read()

to

with tf.gfile.Open(inference_graph_path, 'rb') as graph_def_file:
    graph_content = graph_def_file.read()

In short, add binary read flags.

I haven't checked whether any other scripts need changes on this account. If so, I'll post an update to this reply.

There have been quite a few commits in the last few months to extend Python 3 support. I guess this was missed.

@tensorflowbutler could you please review the pull requests for Python 3 support? If this hasn't been addressed in a pull request before, I'll open one.

Thank you @varun19299, this solved it for me (Python 3.6).

Closing as this is resolved.

Just for clarification: the TFRecord that should be provided via the --input_tfrecord_paths argument is the TFRecord of the test set, right?

YES

I have a problem. I used:
%cd /content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/inference
!python infer_detections.py \
--input_tfrecord_paths = /content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Test.record \
--output_tfrecord_path = /content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Guardar_inferencia/NOSE.record \
--inference_graph = /content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Guardar_inferencia/frozen_inference_graph.pb \
--discard_image_pixels

I then get the following error:

File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/lib/io/file_io.py", line 84, in _preread_check
compat.as_bytes(self.__name), 1024 * 512)
tensorflow.python.framework.errors_impl.NotFoundError: =; No such file or directory

Please help me.

Issue while running infer_detections.py:

Original stack trace for 'ReaderReadV2':
  File "object_detection/inference/infer_detections.py", line 96, in <module>
    tf.app.run()
  File "/root/anaconda3/lib/python3.7/site-packages/tensorflow_core/python/platform/app.py", line 40, in run
    _run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
  File "/root/anaconda3/lib/python3.7/site-packages/absl/app.py", line 299, in run
    _run_main(main, args)

I solved the problem by removing the spaces around the '=' signs:

!python infer_detections.py \
--input_tfrecord_paths=/content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Test.record \
...
--discard_image_pixels
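
Since the Drive path itself contains a space ('My Drive'), quoting each path should also avoid the NotFoundError; a sketch using the same paths (untested, for illustration):

!python infer_detections.py \
--input_tfrecord_paths="/content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Test.record" \
--output_tfrecord_path="/content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Guardar_inferencia/NOSE.record" \
--inference_graph="/content/gdrive/My Drive/EntrenarSSD/models/research/object_detection/Guardar_inferencia/frozen_inference_graph.pb" \
--discard_image_pixels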

I tried the binary read ('rb') fix above. Still facing the same error.
