Models: Using slim DatasetDataProvider to read an image with changeable shape

Created on 30 Mar 2017 · 10 comments · Source: tensorflow/models

I'm doing detection on images of changeable shape.

provider = slim.dataset_data_provider.DatasetDataProvider(
        dataset,
        num_readers=FLAGS.num_readers,
        common_queue_capacity=1,
        common_queue_min=1)
[image, label, gt_masks, gt_boxes, ih, iw] = provider.get(['image', 'label',
                                                             'gt_masks', 'gt_boxes',
                                                             'height', 'width'])
# image's shape is unknown at this point
image, label, gt_masks, gt_boxes, ih, iw = tf.train.batch(
        [image, label, gt_masks, gt_boxes, ih, iw],
        batch_size=1,
        num_threads=FLAGS.num_preprocessing_threads,
        enqueue_many=False,
        capacity=1)

This raised ValueError: All shapes must be fully defined: [TensorShape([Dimension(None), Dimension(None), Dimension(3)])]
Does slim DatasetDataProvider support variable shape inputs?
What should I do?

All 10 comments

If I just run it without tf.train.batch:

# create default session ..
[image, label, gt_masks, gt_boxes, ih, iw] = provider.get(['image', 'label',
                                                               'gt_masks', 'gt_boxes',
                                                               'height', 'width'])
npimage = image.eval()
print(npimage)

It gets stuck somewhere for a long while.

You should make sure image has a specified shape by calling image.set_shape before passing it to tf.train.batch.
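A minimal sketch of that advice, using a placeholder to stand in for the provider's output (the 224x224 size is an assumption; set_shape only helps when every image really has that one size):

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Stand-in for the tensor returned by provider.get(); after
# decode_jpeg the static shape is only [None, None, 3].
image = tf.compat.v1.placeholder(tf.uint8, shape=[None, None, 3])

# set_shape does not resize anything: it only records static shape
# information (and fails at runtime if the data disagrees), which is
# what tf.train.batch needs. 224x224 is a made-up size.
image.set_shape([224, 224, 3])

print(image.shape.as_list())  # [224, 224, 3]
```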

Oh sorry, didn't notice you mentioned "changeable shapes." I'm not sure whether what you're looking for is supported.

@nealwu Thanks.
According to issue #2604, I guess DatasetDataProvider + tf.train.batch cannot solve my problem, since tf.train.batch requires a static shape. I need to find a way to bypass tf.train.batch.
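For what it's worth, tf.train.batch also accepts dynamic_pad=True, which pads each tensor up to the largest shape in the batch instead of requiring a fully defined static shape. A rough NumPy sketch of that padding behaviour (pad_to_max is my own helper, not a TensorFlow API):

```python
import numpy as np

def pad_to_max(images):
    """Pad a list of [h, w, 3] arrays to the largest height/width
    present, mimicking what tf.train.batch(..., dynamic_pad=True)
    does when assembling a batch of variable-shape tensors."""
    max_h = max(img.shape[0] for img in images)
    max_w = max(img.shape[1] for img in images)
    batch = np.zeros((len(images), max_h, max_w, 3), dtype=images[0].dtype)
    for i, img in enumerate(images):
        # Each image keeps its own data in the top-left corner;
        # the rest of the slot stays zero.
        batch[i, :img.shape[0], :img.shape[1], :] = img
    return batch

imgs = [np.ones((4, 6, 3)), np.ones((5, 3, 3))]
batch = pad_to_max(imgs)
print(batch.shape)  # (2, 5, 6, 3)
```

With batch_size=1, as in the snippet above, no actual padding happens, so dynamic_pad is essentially free.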

This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there. Thanks!

But you might consider preprocessing your images to all be the same size as the easiest answer to your problem.
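That suggestion might look like the following; tf.image.resize_images was the TF 1.x name (plain tf.image.resize in TF 2.x), and the 224x224 target size is arbitrary:

```python
import numpy as np
import tensorflow as tf

# A fake decoded image of arbitrary size; in the original code this
# would be the `image` tensor from provider.get().
image = tf.constant(np.random.rand(480, 640, 3), dtype=tf.float32)

# Resize every image to one fixed size before batching, so
# tf.train.batch sees a fully defined shape.
resized = tf.image.resize(image, [224, 224])

print(resized.shape)  # (224, 224, 3)
```

Note that resizing distorts the boxes/masks too, so gt_boxes would have to be rescaled by the same factors.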

@CharlesShang I wonder if you found a solution for decoding tfrecords into images of different sizes. Would you like to share it? Thanks a lot.

@TJCVRS I have the same problem. How did you solve it?

@gleefeng You can use a sparse tensor to decode the tfrecords.

@MaybeShewill-CV A nice idea, but how do you encode variable-length data after data augmentation? In my case I have a variable number of bounding boxes, so I must sparsify the data after augmentation. Any solution? Thank you!
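A sketch of the sparse-tensor route described above, round-tripping a variable-length gt_boxes feature (the feature name and box values are made up): tf.io.VarLenFeature parses into a SparseTensor, so the number of boxes need not be known when the record is read.

```python
import tensorflow as tf

# Flattened [N, 4] boxes; N varies per image. Two boxes here.
boxes = [0.1, 0.2, 0.5, 0.6,
         0.3, 0.3, 0.9, 0.8]
example = tf.train.Example(features=tf.train.Features(feature={
    'gt_boxes': tf.train.Feature(
        float_list=tf.train.FloatList(value=boxes)),
}))

# VarLenFeature yields a SparseTensor of whatever length was stored.
parsed = tf.io.parse_single_example(
    example.SerializeToString(),
    {'gt_boxes': tf.io.VarLenFeature(tf.float32)})

# Densify and restore the [N, 4] layout.
gt_boxes = tf.reshape(tf.sparse.to_dense(parsed['gt_boxes']), [-1, 4])
print(gt_boxes.shape)  # (2, 4)
```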
