Error when running the webcam or notebook demo
```
AssertionError                            Traceback (most recent call last)
<ipython-input-16-ba06eae97bcc> in <module>()
      1 # compute predictions
----> 2 predictions = coco_demo.run_on_opencv_image(image)
      3 imshow(predictions)

/Users/an.tran/Desktop/map-workspace/code/maskrcnn-benchmark/demo/predictor.pyc in run_on_opencv_image(self, image)
    167             the BoxList via `prediction.fields()`
    168         """
--> 169         predictions = self.compute_prediction(image)
    170         top_predictions = self.select_top_predictions(predictions)
    171

/Users/an.tran/Desktop/map-workspace/code/maskrcnn-benchmark/demo/predictor.pyc in compute_prediction(self, original_image)
    212             # in the image, as defined by the bounding boxes
    213             masks = prediction.get_field("mask")
--> 214             masks = self.masker(masks, prediction)
    215             prediction.add_field("mask", masks)
    216         return prediction

/Users/an.tran/Desktop/map-workspace/code/maskrcnn-benchmark/maskrcnn_benchmark/modeling/roi_heads/mask_head/inference.pyc in __call__(self, masks, boxes)
    183
    184         # Make some sanity check
--> 185         assert len(boxes) == len(masks), "Masks and boxes should have the same length."
    186
    187         # TODO: Is this JIT compatible?

AssertionError: Masks and boxes should have the same length.
```
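For context, the assertion fires because `Masker.__call__` now expects per-image lists of masks and boxes, while the demo still passes one image's mask tensor and a single `BoxList`. A minimal sketch of the length mismatch, using a hypothetical stub in place of the real `BoxList` class:

```python
import torch

class BoxListStub:
    """Stand-in for maskrcnn_benchmark's BoxList holding n boxes (stub only)."""
    def __init__(self, n):
        self.bbox = torch.zeros(n, 4)
    def __len__(self):
        return self.bbox.shape[0]

masks = torch.zeros(5, 1, 28, 28)   # one image's mask tensor: len(masks) == 5
boxes = [BoxListStub(5)]            # new Masker wraps the BoxList: len(boxes) == 1

# This is the comparison done inside Masker.__call__ in inference.py:
print(len(masks), len(boxes))       # 5 vs 1 -> the sanity check fails
```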
Steps to reproduce the behavior:

Run Mask_R-CNN_demo.ipynb.

Please copy and paste the output from the environment collection script from PyTorch (or fill out the checklist below manually).
You can get the script and run it with:

```shell
wget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py
# For security purposes, please check the contents of collect_env.py before running it.
python collect_env.py
```
```
Collecting environment information...
PyTorch version: 1.0.0.dev20181119
Is debug build: No
CUDA used to build PyTorch: None

OS: Mac OSX 10.13.6
GCC version: Could not collect
CMake version: Could not collect

Python version: 2.7
Is CUDA available: No
CUDA runtime version: No CUDA
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA

Versions of relevant libraries:
[pip] 18.0
[conda] Could not collect
```
Same problem here!
@keineahnung2345 It seems the most recent master branch has this logic error.
But the copy I cloned last night works fine... No idea what changed here, but I can also reproduce this error.
@fmassa
@keineahnung2345 For now, you can change it back to the old logic:
```python
class Masker(object):
    """
    Projects a set of masks in an image on the locations
    specified by the bounding boxes
    """

    def __init__(self, threshold=0.5, padding=1):
        self.threshold = threshold
        self.padding = padding

    def forward_single_image(self, masks, boxes):
        boxes = boxes.convert("xyxy")
        im_w, im_h = boxes.size
        res = [
            paste_mask_in_image(mask[0], box, im_h, im_w, self.threshold, self.padding)
            for mask, box in zip(masks, boxes.bbox)
        ]
        if len(res) > 0:
            res = torch.stack(res, dim=0)[:, None]
        else:
            res = masks.new_empty((0, 1, masks.shape[-2], masks.shape[-1]))
        return res

    def __call__(self, masks, boxes):
        # TODO do this properly
        if isinstance(boxes, BoxList):
            boxes = [boxes]

        assert len(boxes) == 1, "Only single image batch supported"
        result = self.forward_single_image(masks, boxes[0])
        return result
```
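With this older logic, the demo's original call works again, since one image's masks and boxes are passed directly. A self-contained toy run of the class above, with hypothetical stubs standing in for maskrcnn_benchmark's `BoxList` and `paste_mask_in_image` (the real implementations live in the repo):

```python
import torch

# --- stubs, just to make the sketch runnable on its own ---
def paste_mask_in_image(mask, box, im_h, im_w, threshold, padding):
    # The real function resizes the mask and pastes it into the image canvas;
    # here we only return a correctly shaped placeholder.
    return torch.zeros(im_h, im_w, dtype=torch.uint8)

class BoxList:
    def __init__(self, bbox, size):
        self.bbox, self.size = bbox, size
    def convert(self, mode):
        return self
    def __len__(self):
        return self.bbox.shape[0]

# --- the old-style Masker, condensed from the snippet above ---
class Masker(object):
    def __init__(self, threshold=0.5, padding=1):
        self.threshold, self.padding = threshold, padding

    def forward_single_image(self, masks, boxes):
        boxes = boxes.convert("xyxy")
        im_w, im_h = boxes.size
        res = [
            paste_mask_in_image(m[0], b, im_h, im_w, self.threshold, self.padding)
            for m, b in zip(masks, boxes.bbox)
        ]
        if len(res) > 0:
            return torch.stack(res, dim=0)[:, None]
        return masks.new_empty((0, 1, masks.shape[-2], masks.shape[-1]))

    def __call__(self, masks, boxes):
        if isinstance(boxes, BoxList):
            boxes = [boxes]
        assert len(boxes) == 1, "Only single image batch supported"
        return self.forward_single_image(masks, boxes[0])

masker = Masker()
boxes = BoxList(torch.tensor([[0., 0., 10., 10.], [5., 5., 20., 20.]]), size=(32, 32))
masks = torch.zeros(2, 1, 28, 28)   # two 28x28 mask predictions for one image
out = masker(masks, boxes)          # pastes each mask into the 32x32 image
print(out.shape)                    # -> torch.Size([2, 1, 32, 32])
```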
I merged a PR yesterday that is the culprit. I'll fix this in the next 2h. Sorry for the trouble!
@fmassa Hoping to hear from you soon! :-) I've been trying to dig into the problem myself but couldn't find a clue.
@antran89 @fmassa @jinfagang Hello, I tried to fix this problem and opened a PR in my own repo: https://github.com/keineahnung2345/maskrcnn-benchmark/pull/1/files.
But it makes my kernel restart; I guess it's because `results = torch.stack(results, dim=0)` takes too much memory.
I hope this can help someone else who is trying to fix this issue.
I've fixed this issue in #187
Let me know if you still face the same issue, and thanks for reporting it!
@fmassa it's solved!
I encountered it when I cloned and installed https://github.com/Maosef/maskrcnn-benchmark today, as I need the fix of this PR: https://github.com/facebookresearch/maskrcnn-benchmark/pull/271
Supposedly @Maosef's master is based on a recent enough maskrcnn-benchmark master, so this shouldn't have happened.
Anyway, it was fixed by manually editing predictor.py according to this commit: https://github.com/facebookresearch/maskrcnn-benchmark/pull/187/commits/7cf1d982527d8fb5ded4bd1b4eae77517bb122ef
@mattans You can see that the Maosef branch is 22 commits behind master, and it doesn't include the fix from https://github.com/facebookresearch/maskrcnn-benchmark/commit/7cf1d982527d8fb5ded4bd1b4eae77517bb122ef
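If it helps anyone applying the edit by hand: the linked commit makes the demo pass per-image lists to the masker and unwrap the result. A toy reproduction of just the length check (a hypothetical stand-in, not the real Masker) showing why the wrapped call passes:

```python
import torch

class ToyMasker:
    """Reproduces only the sanity check from inference.py's Masker (stub)."""
    def __call__(self, masks, boxes):
        assert len(boxes) == len(masks), "Masks and boxes should have the same length."
        return masks

masker = ToyMasker()
masks = torch.zeros(5, 1, 28, 28)   # 5 mask predictions for a single image
boxes_per_image = [object()]        # one (stubbed) BoxList per image

# Old demo call: masker(masks, boxes_per_image) -> 5 vs 1, AssertionError.
# Fixed call, wrapping masks per image and unwrapping afterwards:
out = masker([masks], boxes_per_image)[0]
print(out.shape)                    # torch.Size([5, 1, 28, 28])
```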