I am trying to run the following code on a PC with a Titan X GPU:
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D, MaxPooling1D, Convolution1D
from keras.optimizers import SGD
import numpy as np
X = [
[
[
[1,2,3],
[1,2,3],
[1,2,3],
[1,2,3]
]
]
]
Y = [
[1,2,3,4,5]
]
X = np.array(X)
Y = np.array(Y)
Conv_size = 10
Dense_size = 10
filters = 5
rows = 2
cols= X.shape[-1]
model = Sequential()
model.add(Convolution2D(filters, rows, cols, activation='sigmoid', input_shape=X.shape[1:]))
model.add(MaxPooling2D(pool_size=(rows, cols)))
model.add(Flatten())
model.add(Dense(Dense_size, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='linear'))
model.compile(loss='rmse', optimizer='sgd')
model.fit(X, Y, nb_epoch=50)
and it yields the following error:
>>> model.add(Dense(Dense_size, activation='sigmoid'))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/keras/layers/containers.py", line 68, in add
self.layers[-1].set_previous(self.layers[-2])
File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 82, in set_previous
assert self.input_ndim == len(layer.output_shape), ('Incompatible shapes: layer expected input with ndim=' +
File "/usr/local/lib/python2.7/dist-packages/keras/layers/core.py", line 814, in output_shape
'(got ' + str(input_shape[1:]) + '. '
Exception: The shape of the input to "Flatten" is not fully defined (got (5, 1, 0). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
Also, the versions are:
pip show theano | grep Version
Metadata-Version: 1.1
Version: 0.8.0.dev0
pip show keras | grep Version
Metadata-Version: 1.1
Version: 0.3.1
When I run the same code on another PC, it passes!
>>> model = Sequential()
>>> model.add(Convolution2D(filters, rows, cols, activation='sigmoid' , input_shape = X.shape[1:]))
>>> model.add(MaxPooling2D(pool_size=(rows, cols)))
>>> model.add(Flatten())
>>> model.add(Dense(Dense_size, activation='sigmoid'))
>>> model.add(Dropout(0.5))
>>> model.add(Dense(5, activation='linear'))
>>> model.compile(loss='rmse', optimizer='sgd')
>>> model.fit(X, Y, nb_epoch=5)
Epoch 1/5
1/1 [==============================] - 0s - loss: 3.4482
Epoch 2/5
1/1 [==============================] - 0s - loss: 3.7623
Epoch 3/5
1/1 [==============================] - 0s - loss: 3.7835
Epoch 4/5
1/1 [==============================] - 0s - loss: 3.8206
Epoch 5/5
1/1 [==============================] - 0s - loss: 4.4849
<keras.callbacks.History object at 0x5751610>
That machine has no GPU, and its versions are:
pip show theano | grep Version
Metadata-Version: 1.1
Version: 0.7.0
pip show keras | grep Version
Metadata-Version: 2.0
Version: 0.3.0
What can I do in order to make use of such a good GPU?
Should I fall back to Keras 0.3.0, or is it a Theano problem?
Thank you in advance.
You seem confused. The error message is clear enough:
Exception: The shape of the input to "Flatten" is not fully defined (got (5, 1, 0). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.
So your Flatten layer is getting an input of shape (5, 1, 0), i.e. with a zero-sized dimension, which is not a valid input shape. Hence the error. Your pooling layer is reducing the previous layer's output to 0 columns.
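For concreteness, here is the shape arithmetic that produces (5, 1, 0). This is a sketch assuming Theano "th" ordering, i.e. (channels, rows, cols), and 'valid' border mode; the two helper functions are mine for illustration, not Keras API:

def conv_out(size, kernel):
    # a 'valid' convolution shrinks each spatial dimension
    return size - kernel + 1

def pool_out(size, pool):
    # pooling that ignores leftover borders floor-divides
    return size // pool

# each sample of X has shape (1, 4, 3): 1 channel, 4 rows, 3 cols
r, c = conv_out(4, 2), conv_out(3, 3)  # Convolution2D(5, 2, 3) -> (5, 3, 1)
r, c = pool_out(r, 2), pool_out(c, 3)  # MaxPooling2D((2, 3))   -> (5, 1, 0)
print((5, r, c))                       # (5, 1, 0), the shape in the error

The floor division is what produces the 0: a single column pooled over a window of three leaves nothing. Whether leftover borders are ignored is exactly the kind of default that can differ between Theano 0.7 and 0.8, which would plausibly explain why only one of the two machines raises the error.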
Hello Mr. Chollet,
I understand the printed error.
The problem is that it should not occur.
If you look at the code, the Flatten layer should not be getting this input.
Also, I have posted the output from two different machines.
The one with the GPU yields an error.
The second, without a GPU, runs the same code smoothly and produces output.
If you haven't solved it by now, it's because your pooling layer has completely shrunk the input. Remove it and you should be fine.
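For illustration, one way to apply that advice to the model from the question, sketched with the same 0.3-era API; the pool_size of (3, 1) is my choice, sized to the (5, 3, 1) convolution output so that no dimension collapses to 0:

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
import numpy as np

X = np.ones((1, 1, 4, 3))
Y = np.ones((1, 5))

model = Sequential()
model.add(Convolution2D(5, 2, 3, activation='sigmoid', input_shape=X.shape[1:]))
# the conv output is (5, 3, 1); pooling over (3, 1) instead of (2, 3)
# yields (5, 1, 1) rather than (5, 1, 0)
model.add(MaxPooling2D(pool_size=(3, 1)))
model.add(Flatten())
model.add(Dense(10, activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(5, activation='linear'))
model.compile(loss='rmse', optimizer='sgd')
model.fit(X, Y, nb_epoch=5)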
Have you found a solution?
I have the same problem. My code was running on my computer without a GPU, but after I added a GPU, I got the same error.
This problem occurred when I used Flatten after a pooling layer.
I had used pool_size incorrectly.
I found the solution by carefully reading the documentation,
as well as by using
model.layers[-1].get_weights()[0].shape
which was extremely helpful for choosing appropriate values for pool_size.
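For anyone following along, a sketch of that debugging step with the 0.3-era API. get_weights()[0].shape shows the kernel shape; output_shape (visible in the traceback above, and my addition to the recipe) shows what the next layer will actually receive:

from keras.models import Sequential
from keras.layers.convolutional import Convolution2D, MaxPooling2D

model = Sequential()
model.add(Convolution2D(5, 2, 3, input_shape=(1, 4, 3)))
print(model.layers[-1].get_weights()[0].shape)  # kernel, e.g. (5, 1, 2, 3)
print(model.layers[-1].output_shape)            # e.g. (None, 5, 3, 1)

model.add(MaxPooling2D(pool_size=(2, 3)))
print(model.layers[-1].output_shape)            # a 0 here exposes the bug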
Also, try using dummy data created with numpy (or a really small subcorpus) just to get the model to compile.
Then switch to the full data set at hand.
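A minimal sketch of the dummy-data trick, reusing the model built above: random arrays shaped like the real inputs and targets (the shapes here match the model from the question; nb_epoch is the 0.3-era argument name), fitted for one epoch just to flush out shape errors early:

import numpy as np

X_dummy = np.random.random((4, 1, 4, 3))  # (samples, channels, rows, cols)
Y_dummy = np.random.random((4, 5))        # (samples, outputs)
model.fit(X_dummy, Y_dummy, nb_epoch=1)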
In the end, the problem was not that the first computer yielded the error.
The problem was that the second one did not yield the error when in fact it should have.
I hope that helps.
Thank you for your reply.
My problem was that I had forgotten to change image_dim_ordering to "th" when I changed the backend to Theano.
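For reference, in the Keras versions that read a ~/.keras/keras.json config file, both settings live there; a sketch of a Theano-consistent configuration (the exact set of fields varies by version):

{
    "image_dim_ordering": "th",
    "backend": "theano",
    "epsilon": 1e-07,
    "floatx": "float32"
}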
@hadikazemi AHA! There is my answer. Thanks so much!