Keras: Embedding doesn't work for float16

Created on 25 Jun 2018 · 5 comments · Source: keras-team/keras

Hi

While running imdb_lstm.py from the Keras examples
(code can be found here: https://gist.github.com/raghavgurbaxani/20c08c55eca5e97cd5c51389c091fc9f)

I get the error
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[14,25] = 20000 is not in [0, 20000)
[[Node: embedding_1/GatherV2 = GatherV2[Taxis=DT_INT32, Tindices=DT_INT32, Tparams=DT_HALF, _class=["loc:@training/Adam/gradients/embedding_1/GatherV2_grad/Reshape"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast, lstm_1/TensorArrayUnstack/range/start)]]

Caused by op u'embedding_1/GatherV2', defined at:

Does tf.gather not support float16?

(Using TensorFlow 1.8.0 and Keras 2.2.0 on a Titan X.)

Thanks :)
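
For reference, a minimal sketch of the setup (my assumption is that the gist only adds a `K.set_floatx('float16')` call on top of the stock imdb_lstm.py example; the actual gist may differ in details):

```python
from keras import backend as K
from keras.preprocessing import sequence
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Dense

K.set_floatx('float16')  # assumed: the only change relative to imdb_lstm.py

max_features = 20000  # vocabulary size, also the Embedding input_dim
maxlen = 80           # cut reviews after this many words

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=max_features)
x_train = sequence.pad_sequences(x_train, maxlen=maxlen)
x_test = sequence.pad_sequences(x_test, maxlen=maxlen)

model = Sequential()
model.add(Embedding(max_features, 128))
model.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=32, epochs=1,
          validation_data=(x_test, y_test))
```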

All 5 comments

tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[14,25] = 20000 is not in [0, 20000)
Check that the input size of your embedding layer is correct; you might have an off-by-one error. Or does this work with float32?
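
A quick way to check the off-by-one hypothesis (hypothetical snippet; `max_features` here stands for the value passed to both `imdb.load_data` and the `Embedding` layer):

```python
from keras.datasets import imdb

max_features = 20000
(x_train, _), (x_test, _) = imdb.load_data(num_words=max_features)

# Every word index must be strictly less than the Embedding layer's input_dim.
max_index = max(max(seq) for seq in list(x_train) + list(x_test))
print(max_index)  # expected: <= 19999 when input_dim is 20000
```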

Hi

It works with float32, and the code is taken from the Keras example on the IMDB dataset.

Ah, sorry, I missed that it is the Keras example. I will test it out as soon as I get my hands on a V100, but it is interesting that the Embedding layer gets pinned to the CPU; I'm not sure whether that could cause errors with float16.
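
If it helps, one way to confirm where the op actually runs (a sketch for TF 1.x; session/config handling is my assumption, not something from the gist):

```python
import tensorflow as tf
from keras import backend as K

# Log the device chosen for every op, e.g. whether embedding_1/GatherV2
# really ends up on the CPU while the rest of the graph is on the GPU.
config = tf.ConfigProto(log_device_placement=True)
K.set_session(tf.Session(config=config))
# ...then build and fit the model; placements are printed when the graph runs.
```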

I'm still getting the problem here.
I tested on both CPU and GPU.

CPU (macOS 10.14): the same InvalidArgumentError problem.
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[1920,59] = -2147483648 is not in [0, 1504091) [[{{node embedding_1/embedding_lookup}}]]

GPU (Windows 10 1903): the loss goes to NaN while training the official IMDB model in float16 mode.
[screenshot of the training log showing the loss becoming NaN]

best regards : )

I'm facing the same issue. My model works with float32, but I get an error with float16. I tried using K.set_epsilon(1e-4) with no success. Any updates on this?
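
One observation that may explain the out-of-range index (my assumption, not confirmed in this thread): with floatx set to float16, the integer word indices pass through a float16 input tensor before the Embedding layer casts them back to int32, and float16 cannot represent every integer near 20000 (its spacing there is 16), so an index like 19999 can round up to 20000. A possible workaround is to declare the model input as int32 so the indices never touch float16; this is only a sketch, and the layer sizes and names are illustrative:

```python
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Dense

K.set_floatx('float16')

max_features = 20000
maxlen = 80

# Keep the word indices as integers end to end; only the embedding output
# and the rest of the network run in float16.
indices = Input(shape=(maxlen,), dtype='int32')
x = Embedding(max_features, 128)(indices)
x = LSTM(128, dropout=0.2, recurrent_dropout=0.2)(x)
out = Dense(1, activation='sigmoid')(x)

model = Model(indices, out)
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
```

This would only avoid the index rounding; it would not by itself explain or fix the NaN loss reported on GPU above.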
