Models: ValueError: The first layer in a Sequential model must get an `input_shape` or `batch_input_shape` argument.

Created on 28 Sep 2018 · 2 Comments · Source: tensorflow/models

import tensorflow as tf

mnist = tf.keras.datasets.mnist # 28x28 images of handwritten digits 0-9

(x_train, y_train),(x_test, y_test) = mnist.load_data()

x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())  # first layer, no input_shape given; this line raises the error below
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=3)

ValueError                                Traceback (most recent call last)
in ()
      9
     10 model = tf.keras.models.Sequential()
---> 11 model.add(tf.keras.layers.Flatten())
     12 model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
     13 model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))

E:\Anaconda3\lib\site-packages\tensorflow\python\keras\_impl\keras\models.py in add(self, layer)
    459       # create an input layer
    460       if not hasattr(layer, '_batch_input_shape'):
--> 461         raise ValueError('The first layer in a '
    462                          'Sequential model must '
    463                          'get an `input_shape` or '

ValueError: The first layer in a Sequential model must get an `input_shape` or `batch_input_shape` argument.
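
For reference, in the TensorFlow release this traceback comes from, Sequential.add() requires the first layer to declare its input shape (newer releases infer it when the model is first fit). A minimal sketch of the usual fix, assuming the standard 28x28 MNIST images, is to pass input_shape to the Flatten layer:

import tensorflow as tf

model = tf.keras.models.Sequential()
# The first layer declares the input shape; everything else is unchanged
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))

The compile and fit calls from the original script then run unchanged.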

Please go to Stack Overflow for help and support:

http://stackoverflow.com/questions/tagged/tensorflow

Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:

  1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.


System information

  • What is the top-level directory of the model you are using:
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow):
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
  • TensorFlow installed from (source or binary):
  • TensorFlow version (use command below):
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version:
  • GPU model and memory:
  • Exact command to reproduce:

You can collect some of this information using our environment capture script:

https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh

You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the problem

Describe the problem clearly here. Be sure to convey here why it's a bug in TensorFlow or a feature request.

Source code / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.

All 2 comments

Moving Flatten() to the end of the stack does not fix this: the first layer would then be a Dense layer that still has no input_shape, and the model would no longer end in the 10-way softmax that sparse_categorical_crossentropy expects. Keep Flatten() first and give it the input shape:

model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
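
An equivalent alternative, sketched under the same assumption of 28x28 inputs, is to put an explicit InputLayer in front of an unchanged Flatten():

model = tf.keras.models.Sequential()
# InputLayer only declares the input shape; it adds no weights
model.add(tf.keras.layers.InputLayer(input_shape=(28, 28)))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))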
