I trained on a dataset I created myself and it was working well. But after updating to 2.1, I got this error message during training:
ValueError Traceback (most recent call last)
2 learning_rate=config.LEARNING_RATE,
3 epochs=2,
----> 4 layers='all')
/mnt/disks/sdb/Mask_RCNN/mrcnn/model.py in train(self, train_dataset, val_dataset, learning_rate, epochs, layers, augmentation)
2312 max_queue_size=100,
2313 workers=workers,
-> 2314 use_multiprocessing=True,
2315 )
2316 self.epoch = max(self.epoch, epochs)
~/tensorflow_GPU/lib/python3.5/site-packages/keras/legacy/interfaces.py in wrapper(*args, **kwargs)
89 warnings.warn('Update your ' + object_name +
90 ' call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper
~/tensorflow_GPU/lib/python3.5/site-packages/keras/engine/training.py in fit_generator(self, generator, steps_per_epoch, epochs, verbose, callbacks, validation_data, validation_steps, class_weight, max_queue_size, workers, use_multiprocessing, shuffle, initial_epoch)
2222 outs = self.train_on_batch(x, y,
2223 sample_weight=sample_weight,
-> 2224 class_weight=class_weight)
2225
2226 if not isinstance(outs, list):
~/tensorflow_GPU/lib/python3.5/site-packages/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight)
1875 x, y,
1876 sample_weight=sample_weight,
-> 1877 class_weight=class_weight)
1878 if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
1879 ins = x + y + sample_weights + [1.]
~/tensorflow_GPU/lib/python3.5/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
1474 self._feed_input_shapes,
1475 check_batch_axis=False,
-> 1476 exception_prefix='input')
1477 y = _standardize_input_data(y, self._feed_output_names,
1478 output_shapes,
~/tensorflow_GPU/lib/python3.5/site-packages/keras/engine/training.py in _standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
121 ': expected ' + names[i] + ' to have shape ' +
122 str(shape) + ' but got array with shape ' +
--> 123 str(data_shape))
124 return data
125
ValueError: Error when checking input: expected input_image_meta to have shape (13,) but got array with shape (14,)
Could anyone help me with this?
Thanks a lot
The error doesn't tell me what might be wrong. But, generally, the length of the metadata vector is determined dynamically, so check the compose_image_meta() function in model.py. Also, are you on release 2.1 or on the latest master branch?
Playing with the "shapes" example too. Changing NUM_CLASSES from 4 to something else results in: "ValueError: Error when checking input: expected input_image_meta to have shape (212,) but got array with shape (16,)"
@waleedka I am on the latest master branch.
Hi, can anyone who solved this please share how you did it? Thanks.
The error is clearly happening due to a mismatch in the size of the image_meta tensor. The size of this tensor is computed in the Config class. That's where I'd start looking and try to trace it to see why the expected image_meta size is different from what the model is getting.
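To make that concrete, here is a sketch of the length arithmetic, reconstructed from compose_image_meta() and Config.__init__ on the master branch (treat the exact field layout as an assumption if you are on an older release):

```python
# Sketch of the image_meta length arithmetic used by the master branch.
def image_meta_size(num_classes):
    return (1              # image_id
            + 3            # original_image_shape (H, W, C)
            + 3            # image_shape after resizing
            + 4            # window (y1, x1, y2, x2) in the resized image
            + 1            # scale factor
            + num_classes) # one "active" flag per class

# Background + 1 foreground class gives 14, the "(14,)" side of the error.
print(image_meta_size(2))
```

The "(13,)" the model expects appears consistent with an older layout that lacked the scale field (11 + NUM_CLASSES), and the "(93,)" reported below matches 12 + 81 COCO classes, so a mixed-version codebase or a stale config are the usual suspects.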
@kvigulis, just in case you were still struggling...
I ran into the same issue and started tracking input_image_meta all over model.py until I realized that I was setting NUM_CLASSES in Config too late, i.e. after instantiating the class!
I was getting the dimension mismatch error because of doing something like this:
config = MyConfig()
config.NUM_CLASSES = 7
So, if this is your case, below is the way around it.
The expected length of input_image_meta corresponds to IMAGE_META_SIZE, which is computed in the __init__ method of Config based on the value of NUM_CLASSES (as hinted by @waleedka). Assuming that you are sub-classing Config to override default parameters, you need to set NUM_CLASSES first and then let Config.__init__() compute IMAGE_META_SIZE with the correct information at its disposal.
For example, I solved with something as simple as the following...
Define your configuration:
class MyConfig(Config):
    # Setting other parameters...
    def __init__(self, num_classes):
        self.NUM_CLASSES = num_classes
        super().__init__()
And then use it:
config = MyConfig(num_classes=7)
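To see why the ordering matters, here is a self-contained toy version of Config that keeps only the relevant field (the 12 + NUM_CLASSES arithmetic mirrors the master branch, so treat it as an approximation of the real class):

```python
# Toy Config: IMAGE_META_SIZE is frozen at construction time.
class Config:
    NUM_CLASSES = 2
    def __init__(self):
        # Computed once, here, from whatever NUM_CLASSES is right now.
        self.IMAGE_META_SIZE = 12 + self.NUM_CLASSES

# Wrong: NUM_CLASSES changed after __init__ already ran.
broken = Config()
broken.NUM_CLASSES = 7
print(broken.IMAGE_META_SIZE)   # still 14, not 19

# Right: override before construction via a subclass.
class MyConfig(Config):
    NUM_CLASSES = 7
print(MyConfig().IMAGE_META_SIZE)  # 19
```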
I think @mminervini got to the root of it. Changing NUM_CLASSES after the config object is created would cause this error.
By the way, unless you need to change the number of classes dynamically, which I would expect is a very rare case, you can update the config in a simpler way like this:
class MyConfig(Config):
    NUM_CLASSES = 17

config = MyConfig()
@waleedka I've done it like that but I still get this:
ValueError: Error when checking input: expected input_image_meta to have shape (16,) but got array with shape (93,)
Can you help me?
@waleedka Thanks bro.
@qnkhuat try this:
After line 2412 in mrcnn/model.py ("image_meta = compose_image_meta...") add "image_meta = image_meta[0:16]". This is not an elegant solution, but it works fine for me. I think the problem is the number of classes in your config file (in debug mode, consider checking the compose_image_meta function). Also, try setting config.NUM_CLASSES = 1 right before executing model.detect.
Sorry for the delay, I've been a little busy. I didn't actually solve the issue; after pulling the latest repo, the problem disappeared.
Hi caozi, how did you solve the issue?
@arturjordao Thanks for the suggestion. It worked in removing the following error:
ValueError: Error when checking input: expected input_image_meta to have shape (13,) but got array with shape (14,)
I was modifying balloon.py under samples/balloon to detect more than one type of object.
Changed the NUM_CLASSES variable under the BalloonConfig class in balloon.py, line 69, to reflect the number of different object classes I am trying to detect.
e.g. originally, balloon.py detects only two classes: BG (background) + balloon
NUM_CLASSES = 1 + 1
After modification, balloon.py detects three objects: BG + balloon + pen
NUM_CLASSES = 1 + 2
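A hedged sketch of the invariant this relies on: NUM_CLASSES must equal 1 (background) plus the number of add_class() calls in your dataset loader. The class names and the commented registration loop below are illustrative, not the actual balloon.py code:

```python
# Foreground classes registered by the dataset loader (illustrative names).
foreground_classes = ["balloon", "pen"]

# Background is always class 0, so it adds 1 on top of the foreground count.
NUM_CLASSES = 1 + len(foreground_classes)

# In the dataset's load function you would register them, e.g.:
#   for i, name in enumerate(foreground_classes, start=1):
#       self.add_class("balloon", i, name)
print(NUM_CLASSES)  # 3, matching NUM_CLASSES = 1 + 2 above
```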
In case this helps anyone, I had the same issue and this is how I solved it:
I created my CustomConfig from the template. Because I was using a COCO-style dataset, I didn't need to add classes manually, so I removed these lines:
# override
def load_coco(self, dataset_dir, subset=None, class_ids=None, return_coco=False):
    # Add classes
    # The following two lines were commented out/removed:
    # self.add_class("element", 1, "class_a")
    # self.add_class("element", 2, "class_b")
This solved the issue of "duplicate classes", but now I am facing another issue, this is the output:
Epoch 1/20
2018-09-20 14:10:32.841730: F ./tensorflow/core/util/cuda_launch_config.h:127] Check failed: work_element_count > 0 (0 vs. 0)
I assume classes were not properly initialized or something along those lines (or I could have remnants of a wrong "source", i.e. "element" instead of "coco", in my case). Anyhow, I will update this comment once/if I figure it out.
Waleed, first of all this is just great! Thank you so much, I really learned a lot by reviewing your code!
Ran into the same error (I had 1000 target classes) because I forgot that background is always class 0. Changing NUM_CLASSES to 1001 in the config resolved the issue.
Hi,
I am getting the same error too. I think it is because my number of classes is greater than the default 80 classes.
Error when checking input: expected input_image_meta to have shape (104,) but got array with shape (15,)
I had the same error. The way I fixed it was to change my load_dataset function so that I added one class for every declared class in my config. (I had 32 classes, so I used self.add_class for 1-31 in the load_dataset function)
In my case I was trying to fine-tune the model on a certain class (I did not need all 81 classes, I was interested in only one). So I resolved the problem by adding
for i in range(1, len(class_names)):
    self.add_class("mysource", i, class_names[i])
where class_names is defined in demo.py (list of all class labels).
Loop should start from 1 because class id 0 is BG.
Then in the load_mask function I make sure that I return the mask with the appropriate class id; in my case I use
np.ones([mask.shape[-1]], dtype=np.int32) * desired_id
Using this method I have to load the whole model, without removing the last layers.
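The class-id construction above can be illustrated with a self-contained sketch (the mask shape and desired_id are made up for the example):

```python
import numpy as np

# load_mask must return (mask, class_ids): a boolean stack of shape
# (H, W, num_instances) plus one class id per instance. When every
# instance belongs to the same class of interest:
def make_class_ids(mask, desired_id):
    return np.ones([mask.shape[-1]], dtype=np.int32) * desired_id

mask = np.zeros((64, 64, 3), dtype=bool)  # 3 instances in this image
print(make_class_ids(mask, 5).tolist())   # three copies of class id 5
```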
@mminervini's solution works for me. Thank you very much. Note that you will also need to pass num_classes=7 when creating the inference config as well.
I know it seems silly, but even knowing it I forgot to add the extra class for the background. Check that you didn't make the same mistake.
NUM_CLASSES = your number of classes + 1
...hey, wait. Don't down-vote me yet ... I was just trying to be helpful! :-p