Keras: ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

Created on 13 Aug 2018 · 2 Comments · Source: keras-team/keras

Hi,
I am trying to pass an RGB image from a simulator into my custom neural network. At the source of the RGB generation (the simulator), the image has shape (3, 144, 256).

This is how I construct the neural network:

rgb_model = Sequential()
rgb = env.shape()  # this is (3, 144, 256)
rgb_shape = (1,) + rgb
rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))

Now, my rgb_shape is (1, 3, 144, 256).

This is the error that I get:

rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/sequential.py", line 166, in add
    layer(x)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 414, in __call__
    self.assert_input_compatibility(inputs)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/base_layer.py", line 311, in assert_input_compatibility
    str(K.ndim(x)))
ValueError: Input 0 is incompatible with layer conv2d_1: expected ndim=4, found ndim=5

Why is Keras complaining that it found ndim=5 when my input_shape only has 4 dimensions?

Thanks

All 2 comments

@reinforcelearn Can you check the input_shape in rgb_model.add(Conv2D(96, (11, 11), strides=(3, 3), padding='valid', activation='relu', input_shape=rgb_shape, data_format="channels_first"))? A Keras layer's input_shape does not include the batch dimension, so len(input_shape) should be 3 for Conv2D; it seems your input_shape has 4 entries.
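To make the ndim arithmetic concrete, here is a minimal sketch (plain Python, no Keras needed; built_ndim is a helper I made up to mirror what Keras does, not a Keras API). Keras prepends the batch axis (None) to input_shape when building the layer's input tensor, so the tensor's ndim is len(input_shape) + 1:

```python
rgb = (3, 144, 256)        # one simulator frame, channels-first
wrong_shape = (1,) + rgb   # (1, 3, 144, 256): batch axis added by hand

def built_ndim(input_shape):
    # ndim of the tensor Keras builds: batch axis (None) + input_shape
    return len((None,) + tuple(input_shape))

print(built_ndim(rgb))          # 4, what Conv2D expects
print(built_ndim(wrong_shape))  # 5, what the traceback reports
```

So passing input_shape=rgb (without the manual (1,) prefix) should resolve the error; the batch dimension is supplied automatically at fit/predict time.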

I was receiving the same error with my data, but I solved it. I used, and would suggest, layer-by-layer debugging. I was miscalculating some dimensions, and the kernel sizes I was using led to this error. You might need to modify the kernel sizes and strides. Here is how I went about it.
I had N samples, each with shape (7, 7, 2), where 2 is the number of channels. In your channels-first case, that would be (2, 7, 7). I commented out everything in the model except the first layer and printed the model summary with print(rgb_model.summary()). If you repeat this, uncommenting one layer at a time, you can see each layer's output size in the summary and compare your theoretical architecture against the actual code.
I also realized that I was using the wrong arguments in MaxPool2D(): I was confusing pool_size with strides.
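Alongside model.summary(), you can sanity-check the per-layer output sizes by hand. A quick sketch of the arithmetic for 'valid' padding (conv_out is my own helper, not a Keras function):

```python
def conv_out(size, kernel, stride):
    # Output length along one axis for 'valid' padding:
    # out = floor((in - kernel) / stride) + 1
    return (size - kernel) // stride + 1

# First layer from the question: Conv2D(96, (11, 11), strides=(3, 3),
# padding='valid') on a channels-first (3, 144, 256) input
h = conv_out(144, 11, 3)
w = conv_out(256, 11, 3)
print((96, h, w))  # (96, 45, 82)
```

If a later layer's kernel ends up larger than its input (a negative or zero result here), that mismatch is exactly the kind of dimension bug this formula catches before Keras raises an error.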

