Keras: Expected 3 dimensions but got array with shape (11, 2)

Created on 21 Apr 2017 · 38 comments · Source: keras-team/keras

I'm training a model like so:

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, AveragePooling1D
from keras.optimizers import RMSprop

model = Sequential()
model.add(LSTM(24, input_shape=(1200, 19), return_sequences=True, implementation=2))
model.add(TimeDistributed(Dense(1)))
model.add(AveragePooling1D())
model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=.01))
model.fit(train_x, train_y, epochs=100, batch_size=6000, verbose=1, validation_data=(test_x, test_y))

When I run this on a very small dummy data set (while I'm working on getting it working), I get the following error:

ValueError: Error when checking model target: expected dense_2 to have 3 dimensions, but got array with shape (11, 2)

However, if I print the shape of train_y, it's (11, 2), which is exactly the shape of the model output that Keras/Tensorflow is complaining about.

I'm at a loss as to why the model expects a 3-dimensional target when train_y is (11, 2).

Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue.

Thank you!

  • [x] Check that you are up-to-date with the master branch of Keras. You can update with:
    pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps

  • [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.

  • [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
    pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps

  • [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).


All 38 comments

@daynebatten Can you share the [dummy] dataset that you are using?

Here you go...

Data Set

import psycopg2
import numpy as np
from sklearn.preprocessing import minmax_scale

results = np.loadtxt('input.csv', delimiter=',')
results[:, 0:3] = minmax_scale(results[:, 0:3], axis=0)

num_series = 0

for result in results:
    if result[20] == 1:
        num_series += 1

y = np.empty((num_series, 2))
x = np.empty((num_series, 1200, 19))

i = 0

for result in results:
    if result[20] == 1:
        if i > 0:
            y[i] = this_y
            x[i] = this_x

        this_y = np.empty(2)

        if result[19] == 5:
            this_y[0] = 1
        else:
            this_y[0] = 0

        this_y[1] = 1 - this_y[0]

        this_x = np.zeros((1, 1200, 19))
        i += 1

    this_x[0, 1200 - int(result[20]), :] = result[0:19]

length = y.shape[0]
cutoff = int(length * .75)

train_x = x[0:cutoff, :, :]
train_y = y[0:cutoff]
test_x = x[cutoff:length, :, :]
test_y = y[cutoff:length]

The problem is that you start with a three-dimensional layer but never reduce the dimensionality in any of the following layers.
Try adding model.add(Flatten()) before the last Dense layer:

model = Sequential()
model.add(LSTM(24, input_shape=(1200, 19), return_sequences=True, implementation=2))
model.add(TimeDistributed(Dense(1)))
model.add(AveragePooling1D())

model.add(Flatten())

model.add(Dense(2, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=.01))
model.fit(train_x, train_y, epochs=100, batch_size=6000, verbose=1, validation_data=(test_x, test_y))
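
For reference, a minimal sketch of the fixed model with the output shape each layer produces (shapes assume Keras 2 defaults, e.g. pool_size=2 for AveragePooling1D):

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense, AveragePooling1D, Flatten

model = Sequential()
model.add(LSTM(24, input_shape=(1200, 19), return_sequences=True))  # (None, 1200, 24)
model.add(TimeDistributed(Dense(1)))                                # (None, 1200, 1)
model.add(AveragePooling1D())                                       # (None, 600, 1)
model.add(Flatten())                                                # (None, 600)
model.add(Dense(2, activation='softmax'))                           # (None, 2), matches the (11, 2) target
model.summary()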

I had a similar issue, which was indeed solved by adding a Flatten layer before the first Dense layer. However the docs might be misleading in this case, because documentation for the Dense layer claims that this dimensionality should be implicitly reduced:

Note: if the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel.

Or am I misreading that somehow?
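
As a quick check of what that doc note means in practice, here is a small sketch (assuming Keras 2.x): Dense applied to a rank-3 tensor acts on the last axis and keeps the time dimension, so the output stays 3-D, which is why a 2-D target still fails the shape check.

from keras.models import Sequential
from keras.layers import LSTM, Dense

m = Sequential()
m.add(LSTM(24, input_shape=(1200, 19), return_sequences=True))  # (None, 1200, 24)
m.add(Dense(2, activation='softmax'))                            # applied per time step
print(m.output_shape)  # (None, 1200, 2), not (None, 2)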

Like @fuine, adding a Flatten layer solved my problem with this. But I'm additionally confused because (a) the necessity for a Flatten layer isn't reflected in at least some of the examples (e.g., IMDb sentiment and text generation) and (b), even more confusingly, the code actually works in at least the text generation example (I haven't run the IMDb example). So at the moment it seems like flattening the outputs of 3D layers before feeding them into 2D layers is sometimes necessary and sometimes not?

Any solution for this so far?

Same here. I was training the cats vs. dogs dataset with only Dense layers and ran into the same issue. I added the Flatten layer and it worked. I'd love to hear why this seemed to fix it.

I was facing the same issue and Flatten worked for me. I have a doubt though. My model is as follows.

model.add(LSTM(32, batch_input_shape=(batch_size, look_back, 2), stateful=True, return_sequences=True))
model.add(Dropout(0.3))

for i in range(2):
    model.add(LSTM(32, return_sequences=True, stateful=True))
    model.add(Dropout(0.3))
model.add(Flatten())

model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

model.summary()

Couldn't it detect and change the input shape for the Dense layer on its own?

Same issue, it is annoying that these hidden dimension conventions are not clear, see #8527

I am having the same problem with an encoder-decoder seq2seq model for machine translation. I use embedding layers for the input to both the encoder and decoder, and I want to use one-hot encoded target output for the decoder, but feed it only integer tokens to save the memory needed to store the dataset. I don't want the hassle of writing my own data-generator function that converts integer tokens to one-hot arrays.

Looking at the implementation of sparse_categorical_crossentropy() in Keras, there is actually some reshaping going on there, but the docstring doesn't make clear what is assumed about the input/output dims and when/how reshaping is supposed to be done, so it's impossible to know whether what we are experiencing is a bug or a feature, and how to deal with it properly.

The docstring needs to be made clearer by someone who understands the intention of this code.

Furthermore, the docstring needs to be "exported" somehow to the online docs, because it is not shown here: https://keras.io/losses/#sparse_categorical_crossentropy

Who are we going to call to get this fixed? Ghostbusters?
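
In the meantime, a minimal sketch of the usual workaround for integer targets with a sequence output (assuming Keras 2.x with the TensorFlow backend; sizes are dummy values): give the integer targets a trailing axis of size 1, i.e. shape (batch, timesteps, 1), so they pass the dimension check against the 3-D softmax output.

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding, LSTM, TimeDistributed, Dense

vocab_size, timesteps = 1000, 20

model = Sequential()
model.add(Embedding(vocab_size, 64, input_length=timesteps))
model.add(LSTM(128, return_sequences=True))
model.add(TimeDistributed(Dense(vocab_size, activation='softmax')))  # (None, 20, 1000)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam')

x = np.random.randint(0, vocab_size, size=(32, timesteps))
y = np.random.randint(0, vocab_size, size=(32, timesteps, 1))  # note the extra axis
model.fit(x, y, epochs=1, verbose=0)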

Same issue!

I have opened a more detailed report which also provides a work-around for TensorFlow. See https://github.com/tensorflow/tensorflow/issues/17150

I am getting this error:
Error when checking input: expected lstm_40_input to have 3 dimensions, but got array with shape (1191, 26)

model = Sequential()
model.add(LSTM(200, return_sequences=True, input_shape=(1191,26)))
model.add(LSTM(200))
model.add(Dense(1000))
model.summary()

model.fit(input, target, nb_epoch=10, batch_size=32)

input and target are of size (1191, 26).
Can anyone help me with this?

Did you find a solution?

I got the same issue.
However, reading the https://keras.io/getting-started/sequential-model-guide/#specifying-the-input-shape I found this:

If you ever need to specify a fixed batch size for your inputs (this is useful for stateful recurrent networks), you can pass a batch_size argument to a layer. If you pass both batch_size=32 and input_shape=(6, 8) to a layer, it will then expect every batch of inputs to have the batch shape (32, 6, 8)

Yet I'm not sure it's related to this issue.

In keras/engine/input_layer.py, line 91:

batch_input_shape = (batch_size,) + tuple(input_shape)

So Keras computes a new input shape so that batch processing works.
In my case, I was training a dense network on MNIST.
So, when my input shape was (728, 1), Keras changed it to (batch_size, 728, 1), and naturally feeding a (784, 60000) array into this throws the error of expecting 3 dimensions but getting 2.
To solve this issue, instead of adding Flatten(), I changed the input size to (784,) and it worked.
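
A small sketch of that fix for an MNIST-style dense network (dummy data; 784 = 28 x 28 flattened pixels): give the first layer a flat input_shape=(784,) and feed arrays shaped (num_samples, 784), so Keras only prepends the batch dimension.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(128, activation='relu', input_shape=(784,)))
model.add(Dense(10, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')

x = np.random.rand(60000, 784)                   # samples first, features second
y = np.eye(10)[np.random.randint(0, 10, 60000)]  # one-hot labels
model.fit(x, y, epochs=1, batch_size=128, verbose=0)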

@coder3101: Did you mean you changed the input size to (728, )? And this is specified on the final output Dense layer?

SAME ERROR

CODE:

import os
import numpy as np
import binvox_rw

ROOT = 'ModelNet10'
CLASSES = ['bathtub', 'bed', 'chair', 'desk', 'dresser',
           'monitor', 'night_stand', 'sofa', 'table', 'toilet']

# We'll put the data into these arrays
X = {'train': [], 'test': []}
y = {'train': [], 'test': []}

# Iterate over the classes and train/test directories
for label, cl in enumerate(CLASSES):
    for split in ['train', 'test']:
        examples_dir = os.path.join('.', ROOT, cl, split)
        for example in os.listdir(examples_dir):
            if 'binvox' in example:  # Ignore OFF files
                with open(os.path.join(examples_dir, example), 'rb') as file:
                    data = np.int32(binvox_rw.read_as_3d_array(file).data)
                    padded_data = np.pad(data, 3, 'constant')
                    X[split].append(padded_data)
                    y[split].append(label)

# Save to a NumPy archive called "modelnet10.npz"
np.savez_compressed('modelnet5.npz',
                    X_train=X['train'],
                    X_test=X['test'],
                    y_train=y['train'],
                    y_test=y['test'])

import numpy as np
from sklearn.utils import shuffle

data = np.load('modelnet10.npz')
X, Y = shuffle(data['X_train'], data['y_train'])
X_test, Y_test = shuffle(data['X_test'], data['y_test'])

import keras

Y = keras.utils.to_categorical(Y, 10)

from keras.models import Sequential
from keras.layers import Dense, Flatten, Reshape
from keras.layers.convolutional import Conv3D

model = Sequential()
model.add(Reshape((30, 30, 30, 1), input_shape=(30, 30, 30)))
model.add(Conv3D(16, 6, strides=2, activation='relu'))
model.add(Conv3D(64, 5, strides=2, activation='relu'))
model.add(Conv3D(64, 5, strides=2, activation='relu'))
model.add(Flatten())
model.add(Dense(10, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(lr=0.001),
              metrics=['accuracy'])
hist = model.fit(X, Y, batch_size=256, epochs=30, validation_split=0.2, verbose=2, shuffle=True)

ERROR:

ValueError Traceback (most recent call last)
in ()
2 optimizer=keras.optimizers.Adam(lr=0.001),
3 metrics=['accuracy'])
----> 4 model.fit(X, Y,validation_split=0.2,epochs=30, batch_size=32)

~\Anaconda3\lib\site-packages\keras\engine\training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
953 x, y,
954 sample_weight=sample_weight,
--> 955 class_weight=class_weight,
956 batch_size=batch_size)
957 # Prepare validation data.

~\Anaconda3\lib\site-packages\keras\engine\training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
752 feed_input_shapes,
753 check_batch_axis=False, # Don't enforce the batch size.
--> 754 exception_prefix='input')
755
756 if y is not None:

~\Anaconda3\lib\site-packages\keras\engine\training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
124 ': expected ' + names[i] + ' to have ' +
125 str(len(shape)) + ' dimensions, but got array '
--> 126 'with shape ' + str(data_shape))
127 if not check_batch_axis:
128 data_shape = data_shape[1:]

ValueError: Error when checking input: expected reshape_4_input to have 4 dimensions, but got array with shape (0, 1)

Similar issue; apparently this has been going on for almost two years...

Same issue. Even after applying @karimpedia's solution, I still can't get it working.
The following is my code:
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(x_nn.shape[0], x_nn.shape[1]), implementation=2))
model.add(Dropout(0.2))
model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(32, return_sequences=True))
model.add(Dropout(0.2))
model.add(Flatten())
model.add(Dense(units=2, activation='softmax'))
model.compile(optimizer='adam', loss='mean_squared_error')

Does anyone have another solution or more ideas?

Is there any way to use Flatten() with varying sequence lengths? I get the same error as everyone, but when I added a Flatten layer I get

ValueError: The shape of the input to "Flatten" is not fully defined (got (None, 100). Make sure to pass a complete "input_shape" or "batch_input_shape" argument to the first layer in your model.

Is there any way to solve this?

I think the issue is caused by having return_sequences=True in the LSTM() layer. This means that we get a sequence of hidden-state vectors of size n_neurons, one for each time step. After you've built your model, e.g.

model = Sequential()
model.add(LSTM(n_neurons, return_sequences=True, input_shape=(n_time_steps, n_features)))
model.add(Flatten())
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

You can print model.summary().

Note the output_shapes:

  • after the LSTM() layer, it's (None, n_time_steps, n_neurons);
  • after the Flatten() layer, it's (None, n_time_steps × n_neurons);
  • if return_sequences=False, the LSTM() layer returns only the last time step, so its output is already (None, n_neurons) and no Flatten is needed.
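
A small sketch comparing the two variants (dummy sizes; shapes assume Keras 2 defaults), which you can confirm with model.summary():

from keras.models import Sequential
from keras.layers import LSTM, Flatten, Dense

n_time_steps, n_features, n_neurons = 10, 4, 32

# return_sequences=True: 3-D output, so Flatten is needed before Dense
seq_model = Sequential()
seq_model.add(LSTM(n_neurons, return_sequences=True,
                   input_shape=(n_time_steps, n_features)))   # (None, 10, 32)
seq_model.add(Flatten())                                      # (None, 320)
seq_model.add(Dense(1))                                       # (None, 1)
seq_model.summary()

# return_sequences=False: only the last step is returned, already 2-D
last_model = Sequential()
last_model.add(LSTM(n_neurons, input_shape=(n_time_steps, n_features)))  # (None, 32)
last_model.add(Dense(1))                                                 # (None, 1)
last_model.summary()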

I have received multiple different ValueErrors trying to solve this and have changed many parameters.
It is a time series problem: I have data from 60 shops and 215 items over 1034 days. I have split 973 days for train and 61 for test:

train_x = train_x.reshape((60, 973, 215))
test_x = test_x.reshape((60, 61, 215))
train_y = train_y.reshape((60, 973, 215))
test_y = test_y.reshape((60, 61, 215))

My model:

model = Sequential()
model.add(LSTM(100, input_shape=(train_x.shape[1], train_x.shape[2]), return_sequences=True))
model.add(Dense(215))
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])
history = model.fit(train_x, train_y, epochs=10,
                    validation_data=(test_x, test_y), verbose=2, shuffle=False)

ValueError: Error when checking input: expected lstm_1_input to have shape (973, 215) but got array with shape (61, 215)

I'm pretty sure I didn't change anything else, but I just refreshed my GPU and restarted my notebook. You can refresh the GPU without turning everything off and on with WIN+SHIFT+CTRL+B. All of a sudden it started working.

Hello, I have the same error, "ValueError: Error when checking input: expected dropout_input to have 2 dimensions, but got array with shape ()". I'm trying to predict with a pre-trained model.

text = ["This is the Worst movie in the planet" , "Is the best movie in all the world"]
palabraVectorizada = vectorizer.transform(text)

adivina = new_model.predict(np.array(palabraVectorizada))

and the error occurs on this line:
"ValueError: Error when checking input: expected dropout_input to have 2 dimensions, but got array with shape ()"

Flatten works because it "flattens" the 3-D (batch, timesteps, features) output used by the LSTM and GRU implementations into the 2-D (batch, timesteps × features) shape that a Dense net needs... easy, ain't it?

I solved this error by vectorizing the new data I want to predict on with the same vocabulary I used to vectorize my training texts and their labels. I saved the vocabulary to JSON: you can extract the vocabulary from your vectorizer via the vocabulary_ attribute, save it to a variable, then iterate over it and convert the NumPy integers to int before dumping it to a JSON file.

When you want to vectorize a string, you can use TfidfVectorizer again and pass it the vocabulary from the JSON, and then the array shapes fit.
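
A sketch of that approach (the file name and sample texts are made up): save vocabulary_ to JSON, then rebuild a TfidfVectorizer with vocabulary= so new texts map to the same feature columns, and therefore the same input shape, as training. (Note that the idf weights are refit on the new text; pickling the fitted vectorizer is the more faithful option.)

import json
from sklearn.feature_extraction.text import TfidfVectorizer

train_texts = ["this movie was great", "this movie was terrible"]

vectorizer = TfidfVectorizer()
vectorizer.fit(train_texts)

# vocabulary_ maps token -> column index as NumPy integers, so cast to int for JSON
with open("vocabulary.json", "w") as f:
    json.dump({token: int(idx) for token, idx in vectorizer.vocabulary_.items()}, f)

# Later, at prediction time
with open("vocabulary.json") as f:
    vocab = json.load(f)
new_vectorizer = TfidfVectorizer(vocabulary=vocab)
features = new_vectorizer.fit_transform(["is the best movie in all the world"])
print(features.shape)  # (1, len(vocab)), same width as the training features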

Please, can someone help me fix this error, which happens in my model?


model = Sequential()
model.add(LSTM(100, input_shape= (1, 2048), return_sequences=True))
model.add(TimeDistributed(Dense(50)))
model.add(GlobalAveragePooling1D())
model.add(Dense(2048, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=[sensitivity, specificity, precision, recall, 'accuracy'])
print(model.summary())

history = model.fit(x_train_all, y_train_all, validation_data=(x_test_all, y_test_all),batch_size=50, epochs=20)


_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
lstm_10 (LSTM)               (None, 1, 100)            859600
_________________________________________________________________
time_distributed_10 (TimeDis (None, 1, 50)             5050
_________________________________________________________________
global_average_pooling1d_2 ( (None, 50)                0
_________________________________________________________________
dense_18 (Dense)             (None, 2048)              104448
=================================================================
Total params: 969,098
Trainable params: 969,098
Non-trainable params: 0
_________________________________________________________________

None

ValueError Traceback (most recent call last)
in
19 print(model.summary())
20 #history = model.fit(x_train, y_train, validation_data=(x_test, y_test), batch_size=50, epochs=20)
---> 21 history = model.fit(x_train_all, y_train_all, validation_data=(x_test_all, y_test_all),batch_size=50, epochs=20)

/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
950 sample_weight=sample_weight,
951 class_weight=class_weight,
--> 952 batch_size=batch_size)
953 # Prepare validation data.
954 do_validation = False

/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, check_array_lengths, batch_size)
787 feed_output_shapes,
788 check_batch_axis=False, # Don't enforce the batch size.
--> 789 exception_prefix='target')
790
791 # Generate sample-wise weight values given the sample_weight and

/anaconda3/lib/python3.6/site-packages/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
126 ': expected ' + names[i] + ' to have ' +
127 str(len(shape)) + ' dimensions, but got array '
--> 128 'with shape ' + str(data_shape))
129 if not check_batch_axis:
130 data_shape = data_shape[1:]

ValueError: Error when checking target: expected dense_18 to have 2 dimensions, but got array with shape (110, 2048, 2)

Solution: Hey everyone,
I was hit with the same error, but after much searching this worked for me.
The input shape was 830 rows and 8 columns:
X = np.asarray(X)               # -> (830, 8)
X = np.reshape(X, (830, 1, 8))  # i.e. 830 rows of 1 x 8
Then feed that into the LSTM and the job is done. Let me know if you solved the problem this way.
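
A cleaned-up sketch of that reshape with dummy data (the layer sizes are illustrative): 830 samples of 8 features become 830 samples of 1 time step by 8 features, which matches what an LSTM expects.

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

X = np.random.rand(830, 8)   # 830 rows, 8 columns
y = np.random.rand(830, 1)

X = np.reshape(np.asarray(X), (830, 1, 8))  # (samples, time steps, features)

model = Sequential()
model.add(LSTM(16, input_shape=(1, 8)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=1, verbose=0)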

Hi dear,
Thank you for that, yes it did work well.
Regards


I'm still getting the following error after using Flatten()

ValueError: Shapes (?, 12) and (?, ?, ?, ?) must have the same rank
ValueError: Shapes (?, 12) and (?, ?, ?, ?) are not compatible
ValueError: input tensor must have rank 4

The model that I've used is:

def CNNModel(input_shape, epochs, num_classes, batch_size):
    model = Sequential()
    model.add(TimeDistributed(Conv1D(64, kernel_size=9,
                     activation='relu',
                     padding='same',
                     input_shape=input_shape)))
    model.add(TimeDistributed(MaxPooling1D(pool_size=3, strides=1)))
    model.add(TimeDistributed(Conv1D(128, 7, activation='relu')))
    model.add(TimeDistributed(MaxPooling1D(pool_size=3)))
    model.add(TimeDistributed(Conv1D(256, 5, activation='relu')))
    model.add(TimeDistributed(MaxPooling1D(pool_size=3)))
    model.add(Dropout(0.5))
    model.add(CuDNNLSTM(50))
    model.add(Dropout(0.25))
    model.add(TimeDistributed(Dense(64, activation='relu')))
    model.add(Dropout(0.5))
    model.add(Flatten())
    model.add(Dense(num_classes, activation='softmax'))
    return model

Input shape: (9446, 150, 12)
Output shape: (9446, 2)
Can someone help me out with this?

For me, it seems that the problem is caused by TensorFlow adding a None to the input_shape tuple (I'm thinking as a batch_input_shape value), but I can't find where it's set in the code. What I do know is that this is how my first layer is defined:

flatten_layer = tf.keras.layers.Flatten(input_shape=(28, 28))
model = tf.keras.models.Sequential([
  flatten_layer,
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dropout(0.2),
  tf.keras.layers.Dense(10, activation='softmax')
])

flatten_layer.input_shape produces (None, 28, 28) and len(flatten_layer.input_shape) returns 3, not the 2 that I set.

This ultimately leads to a length of input_shape being 3 instead of 2, which triggers ValueError: Error when checking input: expected flatten_7_input to have 3 dimensions, but got array with shape (28, 28) from the following code block in tensorflow/tensorflow/python/keras/engine/training_utils.py:

        if len(data_shape) != len(shape):
          raise ValueError('Error when checking ' + exception_prefix +
                           ': expected ' + names[i] + ' to have ' +
                           str(len(shape)) + ' dimensions, but got array '
                           'with shape ' + str(data_shape))

To get around this, for the time being, I'm reshaping the input like this:
reshaped_input_image = input_image_28x28.reshape(1, 28, 28)

I think this essentially creates a batch of 1 containing my 28x28 image. At least that's what I think :)
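
A tiny sketch of that batch-of-one workaround (dummy image): the model's first axis is the batch dimension, so a single 28x28 image has to become (1, 28, 28) before predict() will accept it.

import numpy as np

input_image_28x28 = np.random.rand(28, 28)
reshaped_input_image = input_image_28x28.reshape(1, 28, 28)
# equivalently: np.expand_dims(input_image_28x28, axis=0)
print(reshaped_input_image.shape)  # (1, 28, 28)
# model.predict(reshaped_input_image) then passes the rank check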

omg, I also fixed my shape error by adding a Flatten layer, while, as noted before, the documentation says it should be done automatically:

https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/Dense

Note: If the input to the layer has a rank greater than 2, then it is flattened prior to the initial dot product with kernel.

I was getting a similar error while giving one-hot encoded vectors as inputs to a Conv1D layer. What I did was add an extra dimension to the input vectors by the following code.
new_vec = np.expand_dims(old_vec,2)
My new_vec now has a shape of (40000, 128, 1), and I just pass this to the input layer
Layer_1_1 = Input(shape = (128,1),name = 'input_layer')

You can try if this works in your case, do note that I got this similar error while passing one-hot encoded values to a Conv1D layer.

classifier = Sequential()
classifier.add(LSTM(512, activation='relu', return_sequences=True,input_shape=(X_train.shape[0], X_train.shape[1])))
classifier.add(LSTM(256, activation='relu', return_sequences=True))
classifier.add(LSTM(256, activation='relu', return_sequences=True))
classifier.add(LSTM(128, activation='relu', return_sequences=True))
classifier.add(LSTM(128, activation='relu', return_sequences=True))
classifier.add(LSTM(128, activation='relu', return_sequences=True))
classifier.add(LSTM(128, activation='relu'))
classifier.add(Flatten())

# Output layer

classifier.add(Dense(2, activation='softmax'))

This is throwing the same error as below. Any help is appreciated. Thank you.

ValueError Traceback (most recent call last)
in
9 # np.unique(Y_train),
10 # Y_train)
---> 11 classifier.fit(X_train, Y_train, class_weight=class_weights, verbose=1)

/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_freq, max_queue_size, workers, use_multiprocessing, **kwargs)
707 steps=steps_per_epoch,
708 validation_split=validation_split,
--> 709 shuffle=shuffle)
710
711 # Prepare validation data.

/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _standardize_user_data(self, x, y, sample_weight, class_weight, batch_size, check_steps, steps_name, steps, validation_split, shuffle, extract_tensors_from_dataset)
2649 feed_input_shapes,
2650 check_batch_axis=False, # Don't enforce the batch size.
-> 2651 exception_prefix='input')
2652
2653 if y is not None:

/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/training_utils.py in standardize_input_data(data, names, shapes, check_batch_axis, exception_prefix)
374 ': expected ' + names[i] + ' to have ' +
375 str(len(shape)) + ' dimensions, but got array '
--> 376 'with shape ' + str(data_shape))
377 if not check_batch_axis:
378 data_shape = data_shape[1:]

ValueError: Error when checking input: expected lstm_10_input to have 3 dimensions, but got array with shape (413378, 244)

Namastey,
Is this solved?

Same issue here with TimeseriesGenerator; here is my code:

n_input = 10
n_features = 1
train_sequences = TimeseriesGenerator(train[target_col], train[target_col], length=n_input, batch_size=1)

model = Sequential()
model.add(LSTM(100, activation='relu', return_sequences=True, input_shape=(n_input, n_features)))
model.add(Flatten())
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.summary()

model.fit_generator(train_sequences, epochs=10)

ValueError: Error when checking input: expected lstm_13_input to have 3 dimensions, but got array with shape (1, 10)
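
Assuming train[target_col] is a 1-D pandas Series, a likely cause is that TimeseriesGenerator only yields 3-D batches of shape (batch, length, n_features) when the data it is given is 2-D. A sketch of a possible fix with dummy data (sizes are illustrative):

import numpy as np
from keras.preprocessing.sequence import TimeseriesGenerator
from keras.models import Sequential
from keras.layers import LSTM, Flatten, Dense

n_input, n_features = 10, 1
series = np.random.rand(200).reshape(-1, 1)   # (200, 1) instead of (200,)

train_sequences = TimeseriesGenerator(series, series, length=n_input, batch_size=1)

model = Sequential()
model.add(LSTM(100, activation='relu', return_sequences=True,
               input_shape=(n_input, n_features)))
model.add(Flatten())
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
model.fit_generator(train_sequences, epochs=1, verbose=0)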

