Mask_rcnn: UnboundLocalError: local variable 'image_id' referenced before assignment

Created on 19 Aug 2018 · 17 comments · Source: matterport/Mask_RCNN

Training starts out fine, and then:

mrcnn_mask_deconv      (TimeDistributed)
mrcnn_class_logits     (TimeDistributed)
mrcnn_mask             (TimeDistributed)
/homes/el302/kates_tensorflow/lib/python3.4/site-packages/tensorflow/python/ops/gradients_impl.py:108: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
  "Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/engine/training.py:2033: UserWarning: Using a generator with `use_multiprocessing=True` and multiple workers may duplicate your data. Please consider using the`keras.utils.Sequence class.
  UserWarning('Using a generator with `use_multiprocessing=True`'
Traceback (most recent call last):
  File "/scratch/datasets/Mask_RCNN/mrcnn/model.py", line 1700, in data_generator
    image_index = (image_index + 1) % len(image_ids)
ZeroDivisionError: integer division or modulo by zero

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/utils/data_utils.py", line 654, in _data_generator_task
    generator_output = next(self._generator)
  File "/scratch/datasets/Mask_RCNN/mrcnn/model.py", line 1818, in data_generator
    dataset.image_info[image_id]))
UnboundLocalError: local variable 'image_id' referenced before assignment
Epoch 1/30
 99/100 [============================>.] - ETA: 0s - loss: 2.9331 - rpn_class_loss: 0.1090 - rpn_bbox_loss: 0.4284 - mrcnn_class_loss: 1.0442 - mrcnn_bbox_loss: 0.6860 - mrcnn_mask_loss: 0.6655
Traceback (most recent call last):
  File "/scratch/datasets/Mask_RCNN/samples/sun/sun.py", line 375, in <module>
    train(model)
  File "/scratch/datasets/Mask_RCNN/samples/sun/sun.py", line 210, in train
    layers='heads')
  File "/scratch/datasets/Mask_RCNN/mrcnn/model.py", line 2381, in train
    use_multiprocessing=True,
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/engine/training.py", line 2195, in fit_generator
    workers=0)
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/legacy/interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/engine/training.py", line 2310, in evaluate_generator
    generator_output = next(output_generator)
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/keras/utils/data_utils.py", line 770, in get
    six.reraise(value.__class__, value, value.__traceback__)
  File "/homes/el302/kates_tensorflow/lib/python3.4/site-packages/six.py", line 693, in reraise
    raise value
UnboundLocalError: local variable 'image_id' referenced before assignment

I checked the annotations for zero-length polygons, and they all look fine and consistent.
Any ideas why this might happen? What should I do about the local variable 'image_id'?
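For context, this exception chain usually means the dataset loaded zero images: `len(image_ids)` is 0, so the modulo raises `ZeroDivisionError`, and the error-logging path then references `image_id`, which was never assigned. A minimal sketch (simplified and hypothetical, not the actual `model.py` code) reproduces the chain:

```python
# Simplified stand-in for mrcnn/model.py's data_generator. With an empty
# image_ids list, the modulo raises ZeroDivisionError; the error handler
# then references image_id, which was never assigned, so Python raises
# UnboundLocalError "during handling of the above exception".
def data_generator(image_ids):
    image_index = -1
    try:
        image_index = (image_index + 1) % len(image_ids)  # ZeroDivisionError if empty
        image_id = image_ids[image_index]
        return image_id
    except ZeroDivisionError:
        # Mirrors the logging line that references image_id before assignment
        return "failed on image {}".format(image_id)  # UnboundLocalError

try:
    data_generator([])
except UnboundLocalError as e:
    print("reproduced:", e)
```

So the `UnboundLocalError` is only a symptom; the real question is why the dataset ended up with zero images.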


All 17 comments

Have you run dataset.prepare() after you load the dataset?
Official example below:

dataset_train = NucleusDataset()
dataset_train.load_nucleus(dataset_dir, subset)
dataset_train.prepare()

Yes, I did it like this:

def train(model):
    """Train the model."""
    # Training dataset.
    dataset_train = SunDataset()
    dataset_train.load_sun(args.dataset, "train")
    dataset_train.prepare()

    # Validation dataset
    dataset_val = SunDataset()
    dataset_val.load_sun(args.dataset, "val")
    dataset_val.prepare()

It even completed the first epoch.
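A quick guard right after `prepare()` would surface an empty dataset before training starts. Below is a hypothetical sanity check (`check_dataset` is an illustrative helper, not part of matterport's code):

```python
# Hypothetical guard: call right after dataset.prepare() to fail fast if
# zero images were loaded, instead of crashing later inside data_generator
# with ZeroDivisionError / UnboundLocalError.
def check_dataset(dataset, name):
    assert len(dataset.image_ids) > 0, (
        "{} dataset is empty - check its annotation file and paths".format(name))
    print("{}: {} images, {} classes".format(
        name, len(dataset.image_ids), len(dataset.class_names)))

# Usage (after prepare()):
#   check_dataset(dataset_train, "train")
#   check_dataset(dataset_val, "val")
```

Because the validation generator is only consumed at the end of the first epoch, an empty validation set passes silently until exactly this point.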

Could you please post your sun.py here? I suspect something is wrong with image_id.

I've checked your dataset, and I found that the file val/via_region_data.json contains only an empty dict. I suspect that is the cause of your problem, because your code fails at the last step of the first epoch, which is where validation starts.
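To confirm a diagnosis like this, a small sketch can count what a VIA via_region_data.json actually contains (`annotation_summary` is an illustrative helper, not part of the repo):

```python
import json

def annotation_summary(path):
    """Count images and regions in a VIA-style annotation file.

    A file containing only an empty dict ({}) yields an empty dataset,
    which is what crashes data_generator when validation begins.
    """
    with open(path) as f:
        annotations = json.load(f)
    n_images = len(annotations)
    n_regions = sum(len(a.get("regions", [])) for a in annotations.values())
    return n_images, n_regions

# Usage (hypothetical path):
#   print(annotation_summary("val/via_region_data.json"))
```

A result of `(0, 0)` for the validation file would match the failure mode described in this thread.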

@keineahnung2345 Thank you!! It solved my problem; absolutely happy.

@hateful-kate You're welcome.

@aegorfk How did you solve the problem?
I have validation data with 56 samples but am still facing the same error.

@EswarSaiKrish Well, my annotation file was empty. I had not noticed that; it is fixed now.

@aegorfk Hi, can you please show us how you fixed this problem?

@keineahnung2345 Same problem, although my JSON file is not empty. Can you suggest any solutions?

@aegorfk Hi, can you please show us how you fixed this problem?
Hi, I checked the file and placed the annotation file in the right directory.

I need more details of your code, actually. Some steps to check: did you do multiclass classification and adapt matterport's initial code for it? Did you try to visualise some of the images using the notebooks from matterport's code? Did you use a standard annotation tool, or convert to the VGG annotation format?

I use standard annotations (the COCO 2014 dataset).
I found that the JSON file of the validation dataset is empty, although it was downloaded from the official site.

So the root causes are as follows:

  1. The annotation file is empty (as mentioned by @keineahnung2345 and @aegorfk)
  2. The annotation file is invalid or has missing data
  3. The annotation or image file is missing from the data (they have to come in pairs)

Possible reasons:

  • dividing data into train and test folders manually or via script, leaving a mismatch
  • improper deletion of data, where one file of a pair was left undeleted

_PS: In my case, it was a silly mistake of misnaming the 'val' folder as 'test' in the dataset_

Hope this helps. Thank you.
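For root cause 3, the image/annotation pairing can be verified with a short sketch (`find_unpaired` is a hypothetical helper; it assumes a Labelme-style layout of img.jpg next to img.json, as discussed below):

```python
import os

def find_unpaired(data_dir):
    """List base names whose partner file is missing.

    Assumes one img.json next to each img.jpg (hypothetical layout;
    adjust the extensions for your dataset). Returns (images missing a
    JSON, JSONs missing an image).
    """
    names = os.listdir(data_dir)
    jpgs = {os.path.splitext(n)[0] for n in names if n.endswith(".jpg")}
    jsons = {os.path.splitext(n)[0] for n in names if n.endswith(".json")}
    return sorted(jpgs - jsons), sorted(jsons - jpgs)

# Usage (hypothetical paths):
#   print(find_unpaired("dataset/train"))
#   print(find_unpaired("dataset/val"))
```

Running it over both the train and val folders catches the accidentally deleted file mentioned later in this thread.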

  • The annotation file is invalid or has missing data

Hello torta24x, I wonder how you distinguish between an instances_train2017.json file and an instances_val2017.json file.
I have labeled data files and want to train a new model starting from pre-trained COCO weights, but I don't know how to name the JSON file, and what format should a valid JSON file have? Thank you.

@gethubwy I am not sure what you mean by "instances_train2017.json". If you use Labelme, then for img1.jpg it will create an img1.json file.
You have to make sure that img1.jpg and img1.json belong in the same directory or set; it could be the train or val set.

Thank you so much, I found that I accidentally deleted a jpg picture.
