The MNIST example seems to have a problem running with TensorFlow:
Hi, I am completely new to TensorFlow. I just built TensorFlow and tried to run models/tutorials/image/imagenet/classify_image.py, and it ran fine. But when I tried MNIST, I got the following error:
abhishek@phoebusdev:~/Documents/Works/models/tutorials/image/mnist$ python convolutional.py --self-test
Extracting data/train-images-idx3-ubyte.gz
Extracting data/train-labels-idx1-ubyte.gz
Extracting data/t10k-images-idx3-ubyte.gz
Extracting data/t10k-labels-idx1-ubyte.gz
Traceback (most recent call last):
File "convolutional.py", line 339, in
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "convolutional.py", line 231, in main
logits, train_labels_node))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1685, in sparse_softmax_cross_entropy_with_logits
labels, logits)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1534, in _ensure_xent_args
"named arguments (labels=..., logits=..., ...)" % name)
ValueError: Only call sparse_softmax_cross_entropy_with_logits
with named arguments (labels=..., logits=..., ...)
Am I doing anything wrong?
Is this related to your recent changes, @martinwicke?
Exactly the same issue...
Getting the same problem.
Facing the same situation. Can anyone help me figure out which commit I should revert?
Sorry about that. I think I fixed it, please try it.
I'm getting the same problem today (using up-to-date source I pulled and compiled tonight).
I am seeing the same error message when running python /usr/local/lib/python2.7/dist-packages/tensorflow/models/image/mnist/convolutional.py, but MNIST runs normally when I use the code under this repo, python models/tutorials/image/mnist/convolutional.py. I found the reason: the interface was changed under this repo but not in the copy shipped under tensorflow/tensorflow:
/usr/local/lib/python2.7/dist-packages/tensorflow/models/image/mnist/convolutional.py
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits, train_labels_node))
this repo:
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(labels=train_labels_node, logits=logits))
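For anyone who wants to check the new call shape in isolation, here is a minimal TF 1.x sketch; the tensor shapes and values below are made up purely for illustration:
import tensorflow as tf

# Made-up logits and labels just to exercise the keyword-only API:
# 4 examples, 10 classes.
logits = tf.random_normal([4, 10])
labels = tf.constant([3, 1, 4, 1], dtype=tf.int64)

# TF 1.0 requires named arguments; the positional form raises the
# ValueError shown above.
loss = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

with tf.Session() as sess:
    print(sess.run(loss))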
We're preparing this repo to be compatible with TensorFlow at head, which will become TensorFlow 1.0. At that point they should be back in sync. This will cause some disruption in the meantime; I apologize. You should be able to use the nightly builds.
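While the two repos are out of sync, a quick way to check which TensorFlow build you are actually importing (both attributes below are standard module attributes):
import tensorflow as tf

# The keyword-only check lives in the installed TensorFlow build, while the
# positional call comes from whichever copy of the model code you run, so
# knowing the version and install path of the imported package helps explain
# the mismatch.
print(tf.__version__)
print(tf.__file__)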
Getting the same problem.
I hit the same issue on TF 1.0 when training Faster RCNN.
I have also hit this error while training corgi's Faster RCNN. The error log is listed below.
Traceback (most recent call last):
File "./tools/train_net.py", line 96, in
max_iters=args.max_iters)
File "/home/scott/code/Faster-RCNN_TF/tools/../lib/fast_rcnn/train.py", line 222, in train_net
sw.train_model(sess, max_iters)
File "/home/scott/code/Faster-RCNN_TF/tools/../lib/fast_rcnn/train.py", line 95, in train_model
rpn_cross_entropy = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(rpn_cls_score, rpn_label))
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1684, in sparse_softmax_cross_entropy_with_logits
labels, logits)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/nn_ops.py", line 1533, in _ensure_xent_args
"named arguments (labels=..., logits=..., ...)" % name)
ValueError: Only call sparse_softmax_cross_entropy_with_logits
with named arguments (labels=..., logits=..., ...)
For those who hit the same issue in TensorFlow r1.0, please refer to this fix to change the code:
https://github.com/tensorflow/models/pull/864/commits/e93ec37201f5f2116933ae96e505f409ddbf344d
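In short, the change is to pass the tensors by keyword. A minimal sketch of that pattern applied to the rpn_cross_entropy line from the traceback above; the placeholder shapes are made up, and the mapping assumes the pre-1.0 positional order (logits first, then labels):
import tensorflow as tf

# Hypothetical placeholder shapes; in Faster-RCNN_TF these tensors come from
# the network. This only illustrates the keyword-argument form.
rpn_cls_score = tf.placeholder(tf.float32, [None, 2])
rpn_label = tf.placeholder(tf.int32, [None])

# Before (TF < 1.0): ...sparse_softmax_cross_entropy_with_logits(rpn_cls_score, rpn_label)
# After (TF 1.0): keyword arguments are required.
rpn_cross_entropy = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(
        logits=rpn_cls_score, labels=rpn_label))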
@rivendell1984 Thanks! I solved my problem training Faster RCNN with TF 1.0 by following your suggestion.
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=logits, labels=labels, name='xentropy')
Just write logits=logits and labels=labels and it works.
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=output, labels=tf.cast(tf.reshape(X, [-1]), dtype=tf.int32)))