Models: Object detection API: Inference and evaluation incompatibility issue on Windows systems

Created on 25 Nov 2017 · 4 comments · Source: tensorflow/models

System information

  • What is the top-level directory of the model you are using:
    research/object_detection
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
    no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    Win 10, Version 1709 (Build 16299.64) (sorry for that)
  • TensorFlow installed from (source or binary):
    bin
  • TensorFlow version (use command below):
    tried with 1.4.0 and 1.5.0.dev20171115
  • CUDA/cuDNN version:
    CUDA 8.0.61.2, cuDNN 6.0
  • GPU model and memory:
    GTX 1080ti 11 GB
  • other packages:
    python 3.6.3, protobuf 3.5.0.post1, six 1.11.0
    also tried with python3-protobuf 2.5.0
  • Exact command to reproduce:
    python object_detection/inference/infer_detections.py --input_tfrecord_paths=P:\\data\\test_nurLos.record --output_tfrecord_path=detections.tfrecord --inference_graph=P:\\Anaconda3\\envs\\tfgpu\\Lib\\site-packages\\tensorflow\\models\\research\\output_inference_graph\\frozen_inference_graph.pb --discard_image_pixels

Describe the problem

I trained a model on my own dataset/record files and exported the inference graph successfully. Now I would like to infer detections from the same record file I used for evaluation with eval.py and TensorBoard, and evaluate the model following the nice new tutorial. When I try to run inference, a UnicodeDecodeError is raised (see the first stack trace below).
I then changed the encoding used by the as_text function in tensorflow\python\util\compat.py from utf-8 to latin1, but then the error "TypeError: unsupported operand type(s) for &: 'str' and 'int'" occurs.

I read about workarounds that use Python 2.7, but for Windows users there is no TensorFlow build for Python 2.7. I then tried to run the script with another protobuf version (python3-protobuf 2.5.0), but with no success.
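
For reference, a quick check (a minimal sketch using the same frozen graph path as in the command above) confirms that the exported graph is a binary protobuf, which is why decoding it as UTF-8 text fails:

```python
# Hedged sketch: read the exported frozen graph in binary mode and show that
# its raw bytes are not valid UTF-8 text.
graph_path = (r'P:\Anaconda3\envs\tfgpu\Lib\site-packages\tensorflow\models'
              r'\research\output_inference_graph\frozen_inference_graph.pb')

with open(graph_path, 'rb') as f:
    head = f.read(64)

print(head[:16])          # raw protobuf bytes, not readable text
try:
    head.decode('utf-8')  # reproduces the UnicodeDecodeError from the first stack trace
except UnicodeDecodeError as err:
    print(err)
```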

Source code / logs

The conversion script used to create the record files:

```python
# Conversion script (excerpt). FLAGS and class_text_to_int are defined
# elsewhere in the script; see the sketch below for the missing pieces.
import io
import os
from collections import namedtuple

import pandas as pd
import tensorflow as tf
from PIL import Image

from object_detection.utils import dataset_util


def split(df, group):
    # Group the label CSV by filename so each image yields one tf.Example.
    data = namedtuple('data', ['filename', 'object'])
    gb = df.groupby(group)
    return [data(filename, gb.get_group(x))
            for filename, x in zip(gb.groups.keys(), gb.groups)]


def create_tf_example(group, path):
    # Read the encoded JPEG and its dimensions.
    with tf.gfile.GFile(os.path.join(path, '{}'.format(group.filename)), 'rb') as fid:
        encoded_jpg = fid.read()
    encoded_jpg_io = io.BytesIO(encoded_jpg)
    image = Image.open(encoded_jpg_io)
    width, height = image.size

    filename = group.filename.encode('utf8')
    image_format = b'jpg'
    xmins = []
    xmaxs = []
    ymins = []
    ymaxs = []
    classes_text = []
    classes = []

    # Normalize the box coordinates and collect the class labels.
    for index, row in group.object.iterrows():
        xmins.append(row['xmin'] / width)
        xmaxs.append(row['xmax'] / width)
        ymins.append(row['ymin'] / height)
        ymaxs.append(row['ymax'] / height)
        classes_text.append(row['class'].encode('utf8'))
        classes.append(class_text_to_int(row['class']))

    tf_example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': dataset_util.int64_feature(height),
        'image/width': dataset_util.int64_feature(width),
        'image/filename': dataset_util.bytes_feature(filename),
        'image/source_id': dataset_util.bytes_feature(filename),
        'image/encoded': dataset_util.bytes_feature(encoded_jpg),
        'image/format': dataset_util.bytes_feature(image_format),
        'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
        'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
        'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
        'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
        'image/object/class/text': dataset_util.bytes_list_feature(classes_text),
        'image/object/class/label': dataset_util.int64_list_feature(classes),
    }))
    return tf_example


def main(_):
    writer = tf.python_io.TFRecordWriter(FLAGS.output_path)
    path = os.path.join(os.getcwd(), 'images')
    examples = pd.read_csv(FLAGS.csv_input)
    grouped = split(examples, 'filename')
    for group in grouped:
        tf_example = create_tf_example(group, path)
        writer.write(tf_example.SerializeToString())

    writer.close()
    output_path = os.path.join(os.getcwd(), FLAGS.output_path)
    print('Successfully created the TFRecords: {}'.format(output_path))
```
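
The excerpt above references FLAGS and class_text_to_int without showing them. A minimal sketch of those missing pieces, with a placeholder label name standing in for my actual classes, looks like this:

```python
import tensorflow as tf

# Command-line flags used by main() above.
flags = tf.app.flags
flags.DEFINE_string('csv_input', '', 'Path to the input CSV with box annotations')
flags.DEFINE_string('output_path', '', 'Path to the output TFRecord file')
FLAGS = flags.FLAGS


def class_text_to_int(row_label):
    # Placeholder mapping: return the integer id from the label map
    # for each class name; 'my_class' stands in for a real class.
    if row_label == 'my_class':
        return 1
    return None


if __name__ == '__main__':
    tf.app.run()
```

The script is then run with something like `python create_tf_record.py --csv_input=test_labels.csv --output_path=test_nurLos.record` (the script and CSV names here are illustrative).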

First problem:

```
Traceback (most recent call last):
  File "object_detection/inference/infer_detections.py", line 96, in <module>
    tf.app.run()
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\tensorflow\python\platform\app.py", line 129, in run
    _sys.exit(main(argv))
  File "object_detection/inference/infer_detections.py", line 74, in main
    image_tensor, FLAGS.inference_graph)
  File "P:\Anaconda3\envs\tfnightlygpu\Lib\site-packages\tensorflow\models\research\object_detection\inference\detection_inference.py", line 69, in build_inference_graph
    graph_content = graph_def_file.read()
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 126, in read
    pywrap_tensorflow.ReadFromStream(self._read_buf, length, status))
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 94, in _prepare_value
    return compat.as_str_any(val)
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\tensorflow\python\util\compat.py", line 106, in as_str_any
    return as_str(value)
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\tensorflow\python\util\compat.py", line 84, in as_text
    return bytes_or_text.decode(encoding)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 41: invalid start byte
```

Second problem:

```
INFO:tensorflow:Reading graph and building model...
Traceback (most recent call last):
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\google\protobuf\internal\python_message.py", line 1083, in MergeFromString
    if self._InternalParse(serialized, 0, length) != length:
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\google\protobuf\internal\python_message.py", line 1105, in _InternalParse
    (tag_bytes, new_pos) = local_ReadTag(buffer, pos)
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\google\protobuf\internal\decoder.py", line 181, in ReadTag
    while six.indexbytes(buffer, pos) & 0x80:
TypeError: unsupported operand type(s) for &: 'str' and 'int'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "object_detection/inference/infer_detections.py", line 96, in <module>
    tf.app.run()
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\tensorflow\python\platform\app.py", line 129, in run
    _sys.exit(main(argv))
  File "object_detection/inference/infer_detections.py", line 74, in main
    image_tensor, FLAGS.inference_graph)
  File "P:\Anaconda3\envs\tfnightlygpu\Lib\site-packages\tensorflow\models\research\object_detection\inference\detection_inference.py", line 71, in build_inference_graph
    graph_def.MergeFromString(graph_content)
  File "P:\Anaconda3\envs\tfnightlygpu\lib\site-packages\google\protobuf\internal\python_message.py", line 1089, in MergeFromString
    raise message_mod.DecodeError('Truncated message.')
google.protobuf.message.DecodeError: Truncated message.
```
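
The second error follows directly from the latin1 workaround: the graph content now comes back as a Python 3 str, while protobuf's decoder expects bytes. A tiny illustration (assuming only that six is installed, as listed above):

```python
import six

raw = b'\x8a\x01'              # two arbitrary bytes with the high bit set, as in a protobuf varint
text = raw.decode('latin1')    # what the patched compat.py hands back: a str, not bytes

print(six.indexbytes(raw, 0) & 0x80)   # 128 -> the bitwise continuation test works on bytes
try:
    six.indexbytes(text, 0) & 0x80     # on a str this yields a 1-character str ...
except TypeError as err:
    print(err)                         # ... and '&' fails: unsupported operand type(s) for &: 'str' and 'int'
```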

All 4 comments

I ran into a similar issue when running object_detection/inference/infer_detections.py using Python 3.6.3 (Anaconda Python) on both Linux and OS X. The error message was the same as your first problem, namely `UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 41: invalid start byte`.

I discovered that the export graph instructions listed here create a graph in a binary format. The instructions from that page are:

```
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path ${PIPELINE_CONFIG_PATH} \
    --trained_checkpoint_prefix ${TRAIN_PATH} \
    --output_directory output_inference_graph.pb
```

To correct for this, I had to change object_detection/detection_inference.py to read the file as binary. To apply the fix, update line 68 (in the build_inference_graph function) from:

```python
with tf.gfile.Open(inference_graph_path, 'r') as graph_def_file:
```

to

```python
with tf.gfile.Open(inference_graph_path, 'rb') as graph_def_file:
```
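
For context, the relevant part of build_inference_graph then looks roughly like this after the change (a sketch reconstructed from the stack traces above, so the exact surrounding lines may differ; inference_graph_path and image_tensor are the function's arguments):

```python
# Lines around 68-71 of detection_inference.py after the fix:
with tf.gfile.Open(inference_graph_path, 'rb') as graph_def_file:  # 'rb' instead of 'r'
    graph_content = graph_def_file.read()                          # bytes, not str
graph_def = tf.GraphDef()
graph_def.MergeFromString(graph_content)                           # now parses cleanly
# ... the function then imports graph_def into the default graph.
```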

I have a very basic question. Can someone please explain the steps to carry out inference and evaluation to get the mAP and IoU after running the object detection API with TensorFlow 1.4.1? I have a trained model which detects objects in the images, but I have no idea how to find the mAP and IoU.

All help is greatly appreciated!
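
One possible set of steps, sketched from the inference-and-evaluation tutorial referenced earlier in this issue (the metrics script name, flag names, and file names below come from that tutorial and are assumptions that may differ in other versions):

```
# 1) Run inference over an evaluation TFRecord to produce a detections TFRecord.
python object_detection/inference/infer_detections.py \
  --input_tfrecord_paths=test.record \
  --output_tfrecord_path=detections.tfrecord \
  --inference_graph=frozen_inference_graph.pb \
  --discard_image_pixels

# 2) Compute mAP and related metrics offline from the detections TFRecord,
#    using an eval_config/input_config that point at detections.tfrecord.
python object_detection/metrics/offline_eval_map_corloc.py \
  --eval_dir=eval_metrics \
  --eval_config_path=eval_metrics/eval_config.pbtxt \
  --input_config_path=eval_metrics/input_config.pbtxt
```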

Hi There,
We are checking to see if you still need help on this, as this seems to be a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue, and the error you are seeing.
If we don't hear from you in the next 7 days, this issue will be closed automatically. If you don't need help on this issue any more, please consider closing it.
