Models: [Transformer] InvalidArgumentError at the beginning of training a transformer model on CPU

Created on 26 Jul 2018 · 15 comments · Source: tensorflow/models

It throws the following InvalidArgumentError at the beginning of training a Transformer model (models/official/transformer/transformer_main.py) using CPUs.
The training steps followed are the ones described at:
https://github.com/tensorflow/models/tree/master/official/transformer

The full error trace is as follows:

```
2018-07-25 16:16:27.734555: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
I0725 16:16:28.569362 139892177725248 tf_logging.py:115] Benchmark run: {'model_name': 'transformer', 'dataset': {'name': 'wmt_translate_ende'}, 'machine_config': {'cpu_info': {'num_cores': 112, 'cpu_info': 'Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz', 'mhz_per_cpu': 2500.0}, 'gpu_info': {'count': 0}, 'memory_total': 201245483008, 'memory_available': 179603804160}, 'test_id': None, 'run_date': '2018-07-25T23:16:27.746402Z', 'tensorflow_version': {'version': '1.9.0', 'git_hash': 'v1.9.0-0-g25c197e023'}, 'tensorflow_environment_variables': [], 'run_parameters': [{'name': 'allow_ffn_pad', 'bool_value': 'True'}, {'name': 'alpha', 'float_value': 0.6}, {'name': 'attention_dropout', 'float_value': 0.1}, {'name': 'batch_size', 'long_value': 32768}, {'name': 'beam_size', 'long_value': 4}, {'name': 'data_dir', 'string_value': '$HOME/transformer/data'}, {'name': 'default_batch_size', 'long_value': 2048}, {'name': 'default_batch_size_tpu', 'long_value': 32768}, {'name': 'extra_decode_length', 'long_value': 50}, {'name': 'filter_size', 'long_value': 2048}, {'name': 'hidden_size', 'long_value': 512}, {'name': 'initializer_gain', 'float_value': 1.0}, {'name': 'label_smoothing', 'float_value': 0.1}, {'name': 'layer_postprocess_dropout', 'float_value': 0.1}, {'name': 'learning_rate', 'float_value': 2.0}, {'name': 'learning_rate_decay_rate', 'float_value': 1.0}, {'name': 'learning_rate_warmup_steps', 'long_value': 16000}, {'name': 'max_length', 'long_value': 256}, {'name': 'model_dir', 'string_value': '$HOME/logs/transformer/model_base'}, {'name': 'num_heads', 'long_value': 8}, {'name': 'num_hidden_layers', 'long_value': 6}, {'name': 'num_parallel_calls', 'long_value': 112}, {'name': 'optimizer_adam_beta1', 'float_value': 0.9}, {'name': 'optimizer_adam_beta2', 'float_value': 0.997}, {'name': 'optimizer_adam_epsilon', 'float_value': 1e-09}, {'name': 'relu_dropout', 'float_value': 0.1}, {'name': 'repeat_dataset', 'long_value': 1}, {'name': 'static_batch', 'bool_value': 'False'}, {'name': 'tpu', 'string_value': 'None'}, {'name': 'use_synthetic_data', 'bool_value': 'False'}, {'name': 'use_tpu', 'bool_value': 'False'}, {'name': 'vocab_size', 'long_value': 33708}]}
I0725 16:16:32.669298 139892177725248 tf_logging.py:115] Using config: {'_model_dir': '$HOME/logs/transformer/model_base', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': None, '_save_checkpoints_secs': 600, '_session_config': None, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': 100, '_train_distribute': , '_device_fn': None, '_service': None, '_cluster_spec': , '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1}
I0725 16:16:32.672403 139892177725248 tf_logging.py:115] Training schedule:
I0725 16:16:32.672613 139892177725248 tf_logging.py:115] 1. Train for 1 epochs.
I0725 16:16:32.672733 139892177725248 tf_logging.py:115] 2. Evaluate model.
I0725 16:16:32.672834 139892177725248 tf_logging.py:115] 3. Compute BLEU score.
I0725 16:16:32.672941 139892177725248 tf_logging.py:115] Repeat above steps until the BLEU score reaches 25.000000
I0725 16:16:32.675400 139892177725248 tf_logging.py:115] Starting iteration 1
I0725 16:16:32.763708 139892177725248 tf_logging.py:115] Calling model_fn.
$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py:100: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
I0725 16:16:45.845091 139892177725248 tf_logging.py:115] Done calling model_fn.
I0725 16:16:46.302198 139892177725248 tf_logging.py:115] Create CheckpointSaverHook.
I0725 16:16:49.258668 139892177725248 tf_logging.py:115] Graph was finalized.
I0725 16:16:49.283038 139892177725248 tf_logging.py:115] Restoring parameters from $HOME/logs/transformer/model_base/model.ckpt-0
I0725 16:16:53.075307 139892177725248 tf_logging.py:115] Running local_init_op.
I0725 16:16:53.208708 139892177725248 tf_logging.py:115] Done running local_init_op.
I0725 16:17:01.174737 139892177725248 tf_logging.py:115] Saving checkpoints for 0 into $HOME/logs/transformer/model_base/model.ckpt.
Traceback (most recent call last):
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1322, in _do_call
return fn(*args)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1307, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1409, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[168,32] = 33748 is not in [0, 33708)
[[Node: model/Transformer/encode/embedding_shared_weights/embedding/Gather = ResourceGather[Tindices=DT_INT64, _class=["loc:@model...ad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/Transformer/embedding_shared_weights/embedding_and_softmax/weights, FunctionBufferingResourceGetNext)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "transformer_main.py", line 632, in
absl_app.run(main)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 274, in run
_run_main(main, args)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 238, in _run_main
sys.exit(main(argv))
File "transformer_main.py", line 626, in main
run_transformer(flags.FLAGS)
File "transformer_main.py", line 608, in run_transformer
vocab_file=flags_obj.vocab_file)
File "transformer_main.py", line 332, in run_loop
hooks=train_hooks)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 366, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1117, in _train_model
return self._train_model_distributed(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1253, in _train_model_distributed
saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1336, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 577, in run
run_metadata=run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1053, in run
run_metadata=run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1144, in run
raise six.reraise(*original_exc_info)
File "$HOME/.local/lib/python3.6/site-packages/six.py", line 693, in reraise
raise value
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1129, in run
return self._sess.run(*args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 1201, in run
run_metadata=run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/monitored_session.py", line 981, in run
return self._sess.run(*args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[168,32] = 33748 is not in [0, 33708)
[[Node: model/Transformer/encode/embedding_shared_weights/embedding/Gather = ResourceGather[Tindices=DT_INT64, _class=["loc:@model...ad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/Transformer/embedding_shared_weights/embedding_and_softmax/weights, FunctionBufferingResourceGetNext)]]

Caused by op 'model/Transformer/encode/embedding_shared_weights/embedding/Gather', defined at:
File "transformer_main.py", line 632, in
absl_app.run(main)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 274, in run
_run_main(main, args)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/absl/app.py", line 238, in _run_main
sys.exit(main(argv))
File "transformer_main.py", line 626, in main
run_transformer(flags.FLAGS)
File "transformer_main.py", line 608, in run_transformer
vocab_file=flags_obj.vocab_file)
File "transformer_main.py", line 332, in run_loop
hooks=train_hooks)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 366, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1117, in _train_model
return self._train_model_distributed(input_fn, hooks, saving_listeners)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1160, in _train_model_distributed
self.config)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/training/distribute.py", line 794, in call_for_each_tower
return self._call_for_each_tower(fn, *args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/contrib/distribute/python/one_device_strategy.py", line 77, in _call_for_each_tower
return fn(*args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/estimator/estimator.py", line 1107, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "transformer_main.py", line 78, in model_fn
logits = model(inputs, targets)
File "$HOME/models/official/transformer/model/transformer.py", line 91, in __call__
encoder_outputs = self.encode(inputs, attention_bias)
File "$HOME/models/official/transformer/model/transformer.py", line 114, in encode
embedded_inputs = self.embedding_softmax_layer(inputs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/layers/base.py", line 329, in __call__
outputs = super(Layer, self).__call__(inputs, *args, **kwargs)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py", line 703, in __call__
outputs = self.call(inputs, *args, **kwargs)
File "$HOME/models/official/transformer/model/embedding_layer.py", line 76, in call
embeddings = tf.gather(self.shared_weights, x)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 2664, in gather
return params.sparse_read(indices, name=name)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/resource_variable_ops.py", line 767, in sparse_read
self._handle, indices, dtype=self._dtype, name=name)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/ops/gen_resource_variable_ops.py", line 586, in resource_gather
validate_indices=validate_indices, name=name)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3414, in create_op
op_def=op_def)
File "$HOME/miniconda3/envs/models-cpu/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1740, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): indices[168,32] = 33748 is not in [0, 33708)
[[Node: model/Transformer/encode/embedding_shared_weights/embedding/Gather = ResourceGather[Tindices=DT_INT64, _class=["loc:@model...ad/Reshape"], dtype=DT_FLOAT, validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](model/Transformer/embedding_shared_weights/embedding_and_softmax/weights, FunctionBufferingResourceGetNext)]]
```

official support

All 15 comments

Hi @robieta, a simple fix is in PR https://github.com/tensorflow/models/pull/4974.

The issue is also reported in the following threads, along with a PR from @tremblerz and discussion about it:
https://github.com/mlperf/reference/issues/110
https://github.com/mlperf/reference/pull/64
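
The underlying mismatch can also be checked directly: the error says token id 33748 falls outside an embedding table of size 33708, i.e. the generated subtoken vocabulary is larger than the vocab_size hard-coded in model_params.py. A minimal check, assuming the vocabulary file produced by data_download.py sits at the (hypothetical) path below:

```python
# Sketch: verify that the generated subtoken vocabulary fits in the embedding
# table. The path is an assumption; adjust it to your own data_dir.
vocab_file = 'data/vocab.ende.32768'

with open(vocab_file) as f:
    num_subtokens = sum(1 for _ in f)

print('subtokens in vocab file:', num_subtokens)
# If this exceeds vocab_size in model_params.py (33708 here), token ids such as
# 33748 can fall outside [0, 33708) and trigger the InvalidArgumentError.
```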

Hello Everyone,

I am using Keras 2.2.4 over TensorFlow 1.5.0. I am trying to use the Keras-provided Embedding layer on text. My vocab size is 23623 words, each with a unique index. I am trying to embed each word into 50-dimensional vectors.

I get the following error when running an embedding layer defined as:

`Embedding(23624, 50, input_length=5, trainable=False)`

```
InvalidArgumentError (see above for traceback): indices[6,4] = 23624 is not in [0, 23624)
[[Node: embedding_1/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@embedding_1/embeddings"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast)]]
```

Each datapoint here is a number (an index). Upon checking index [6, 4], I found the following:

```python
print(ar_train_data[6,4])
# 5088
```

ar_train_data is an array of shape (162896, 5) where each value is in [0, 23624).
The training stops towards the end of the first epoch with the error above.

I am amazed! 5088 is nowhere out of range for [0, 23624).
The solution above suggests increasing the vocab_size; I am not sure if it will work in my case. If it can, can you suggest by what value?
Can anyone suggest what the issue could be here?

Please let me know if additional code snippets are required for clarity.
The model roughly goes as below:

```python
inputs = Input(shape=(None,), dtype='float32')
# <Embedding layer>
# <Convolution layer>
linear_output = Dense(10, input_shape=(72,), activation='relu')(linear_input)

model = Model(inputs=[inputs], outputs=[linear_output])
model.compile(loss='categorical_crossentropy', optimizer='nadam')
```

Keras version: 2.2.4
TensorFlow version: 1.5.0
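
Two things worth noting here: the indices[6,4] in the error refer to row 6, column 4 of the failing batch, not of the full ar_train_data array, and the traceback below goes through test_loop, so the offending batch comes from the validation data. A minimal check over the full arrays, assuming ar_train_data and ar_valid_data are the integer index arrays from the fit() call:

```python
import numpy as np

# Sketch: scan the full train/validation arrays for out-of-range indices,
# since the error's indices[6,4] only locates a position inside one batch.
for name, arr in [('train', ar_train_data), ('valid', ar_valid_data)]:
    arr = np.asarray(arr)
    print(name, 'min:', arr.min(), 'max:', arr.max())  # max must be < 23624
    bad = np.argwhere((arr < 0) | (arr >= 23624))
    print(name, 'out-of-range positions:', bad[:10])
```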

Full error trace:

```


InvalidArgumentError Traceback (most recent call last)
/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1349 try:
-> 1350 return fn(*args)
1351 except errors.OpError as e:

/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
1328 feed_dict, fetch_list, target_list,
-> 1329 status, run_metadata)
1330

/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py in __exit__(self, type_arg, value_arg, traceback_arg)
472 compat.as_text(c_api.TF_Message(self.status.status)),
--> 473 c_api.TF_GetCode(self.status.status))
474 # Delete the underlying status object from memory otherwise it stays alive

InvalidArgumentError: indices[6,4] = 23624 is not in [0, 23624)
[[Node: embedding_8/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@embedding_8/embeddings"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_8/embeddings/read, embedding_8/Cast)]]

During handling of the above exception, another exception occurred:

InvalidArgumentError Traceback (most recent call last)
in <module>()
----> 1 model.fit(ar_train_data,train_label,validation_data=(ar_valid_data,valid_label),epochs=10,batch_size=batch_size)

/opt/anaconda3/lib/python3.6/site-packages/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, **kwargs)
1037 initial_epoch=initial_epoch,
1038 steps_per_epoch=steps_per_epoch,
-> 1039 validation_steps=validation_steps)
1040
1041 def evaluate(self, x=None, y=None,

/opt/anaconda3/lib/python3.6/site-packages/keras/engine/training_arrays.py in fit_loop(model, f, ins, out_labels, batch_size, epochs, verbose, callbacks, val_f, val_ins, shuffle, callback_metrics, initial_epoch, steps_per_epoch, validation_steps)
210 val_outs = test_loop(model, val_f, val_ins,
211 batch_size=batch_size,
--> 212 verbose=0)
213 val_outs = to_list(val_outs)
214 # Same labels assumed.

/opt/anaconda3/lib/python3.6/site-packages/keras/engine/training_arrays.py in test_loop(model, f, ins, batch_size, verbose, steps)
390 ins_batch[i] = ins_batch[i].toarray()
391
--> 392 batch_outs = f(ins_batch)
393 if isinstance(batch_outs, list):
394 if batch_index == 0:

/opt/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in __call__(self, inputs)
2719 'In order to feed symbolic tensors to a Keras model '
2720 'in TensorFlow, you need tensorflow 1.8 or higher.')
-> 2721 return self._legacy_call(inputs)
2722
2723

/opt/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py in _legacy_call(self, inputs)
2691 session = get_session()
2692 updated = session.run(fetches=fetches, feed_dict=feed_dict,
-> 2693 **self.session_kwargs)
2694 return updated[:len(self.outputs)]
2695

/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
893 try:
894 result = self._run(None, fetches, feed_dict, options_ptr,
--> 895 run_metadata_ptr)
896 if run_metadata:
897 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1126 if final_fetches or final_targets or (handle and feed_dict_tensor):
1127 results = self._do_run(handle, final_targets, final_fetches,
-> 1128 feed_dict_tensor, options, run_metadata)
1129 else:
1130 results = []

/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1342 if handle is None:
1343 return self._do_call(_run_fn, self._session, feeds, fetches, targets,
-> 1344 options, run_metadata)
1345 else:
1346 return self._do_call(_prun_fn, self._session, handle, feeds, fetches)

/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1361 except KeyError:
1362 pass
-> 1363 raise type(e)(node_def, op, message)
1364
1365 def _extend_graph(self):

InvalidArgumentError: indices[6,4] = 23624 is not in [0, 23624)
[[Node: embedding_8/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@embedding_8/embeddings"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_8/embeddings/read, embedding_8/Cast)]]

Caused by op 'embedding_8/embedding_lookup', defined at:
File "/opt/anaconda3/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/opt/anaconda3/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py", line 16, in
app.launch_new_instance()
File "/opt/anaconda3/lib/python3.6/site-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel/kernelapp.py", line 478, in start
self.io_loop.start()
File "/opt/anaconda3/lib/python3.6/site-packages/zmq/eventloop/ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "/opt/anaconda3/lib/python3.6/site-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/opt/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "/opt/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "/opt/anaconda3/lib/python3.6/site-packages/zmq/eventloop/zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel/ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/opt/anaconda3/lib/python3.6/site-packages/ipykernel/zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2728, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/opt/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2850, in run_ast_nodes
if self.run_code(code, result):
File "/opt/anaconda3/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 25, in
sent_embed = Embedding(input_dim=vocab_size+1,output_dim=embedding_size,input_length=2WINDOW_SIZE+1,trainable=False)(sent_grams)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/engine/base_layer.py", line 457, in __call__
output = self.call(inputs, **kwargs)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/layers/embeddings.py", line 141, in call
out = K.gather(self.embeddings, inputs)
File "/opt/anaconda3/lib/python3.6/site-packages/keras/backend/tensorflow_backend.py", line 1228, in gather
return tf.nn.embedding_lookup(reference, indices)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 325, in embedding_lookup
transform_fn=None)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 150, in _embedding_lookup_and_transform
result = _clip(_gather(params[0], ids, name=name), ids, max_norm)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/embedding_ops.py", line 54, in _gather
return array_ops.gather(params, ids, name=name)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/array_ops.py", line 2585, in gather
params, indices, validate_indices=validate_indices, name=name)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py", line 1864, in gather
validate_indices=validate_indices, name=name)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 3160, in create_op
op_def=op_def)
File "/opt/anaconda3/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1625, in __init__
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): indices[6,4] = 23624 is not in [0, 23624)
[[Node: embedding_8/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@embedding_8/embeddings"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_8/embeddings/read, embedding_8/Cast)]]
```

Regards

Hey, did you find a solution? I am facing the same problem.

I have a similar problem:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[52] = 16327 is not in [0, 15360)
[[{{node ROI/GatherV2_2}}]]
```

On the GPU everything works, both on Ubuntu 16.04 and on Windows 10. On the CPU the same code fails on both systems. So it is not a CUDA problem, but rather a non-CUDA problem.
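
That asymmetry matches documented tf.gather behavior: the CPU kernel validates indices and raises InvalidArgumentError, while the GPU kernel stores zeros for out-of-bound indices instead of failing, so bad indices can pass silently on GPU. A minimal TF1-style sketch of the CPU behavior:

```python
import tensorflow as tf

params = tf.constant([[1.0], [2.0]])
with tf.device('/cpu:0'):
    out = tf.gather(params, [2])  # index 2 is outside [0, 2)

with tf.Session() as sess:
    sess.run(out)  # raises InvalidArgumentError on CPU; a GPU kernel would return zeros
```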

I have the same problem:

```
InvalidArgumentError: indices[35,2] = 15384 is not in [0, 15384)
[[{{node embedding_1/embedding_lookup}}]]
```

Did anyone find a solution?
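
Here the failing index 15384 is exactly the table size, which usually means an off-by-one in input_dim: valid indices run from 0 to input_dim - 1, so the table needs max index + 1 rows. A minimal sketch, with a stand-in array in place of the real data:

```python
import numpy as np
from keras.layers import Embedding

data = np.array([[3, 15384, 7]])  # stand-in for the real integer index array
# Valid indices are 0 .. input_dim - 1, so size the table as max index + 1.
embedding = Embedding(input_dim=int(data.max()) + 1, output_dim=50)
```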

Strangely, I solved the problem by "rescaling" my input features before the embedding layer; it worked for me.
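
"Rescaling" here presumably means re-indexing the raw ids into a dense, in-range 0..N-1 space. A minimal sketch of one way to do that, with a stand-in array in place of the real ids:

```python
import numpy as np

raw_ids = np.array([[5, 900, 23624], [900, 5, 5]])  # stand-in for the real ids
# np.unique's return_inverse maps each raw id to a dense id in 0 .. N-1.
unique_ids, dense_ids = np.unique(raw_ids, return_inverse=True)
dense_ids = dense_ids.reshape(raw_ids.shape)
vocab_size = len(unique_ids)  # use this as the Embedding input_dim
```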

Similar problem:

```
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[96,29] = -1 is not in [0, 33501)
[[{{node embedding_1/embedding_lookup}}]]
```

Only an issue on CPU, not GPU.

```
InvalidArgumentError (see above for traceback): indices[0,2] = 86 is not in [0, 86)
[[Node: embedding_1/embedding_lookup = Gather[Tindices=DT_INT32, Tparams=DT_FLOAT, _class=["loc:@embedding_1/embeddings"], validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](embedding_1/embeddings/read, embedding_1/Cast)]]
```

Same error, on CPU too.

Same error, on GPU though.

I define a variable lang_tokenizer and fit it on my texts to convert each word to its index:

```python
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='', oov_token='<unk>')
lang_tokenizer.fit_on_texts(lang)
```

and found that min(lang_tokenizer.index_word.keys()) is equal to 2, not 1.
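
Whatever the lowest assigned index is, Keras tokenizer word indices are 1-based (0 is reserved for padding/masking), so the embedding table needs len(word_index) + 1 rows. A minimal self-contained sketch (the corpus and output_dim here are arbitrary stand-ins):

```python
import tensorflow as tf

lang = ['ein beispiel satz', 'noch ein satz']  # stand-in corpus
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='', oov_token='<unk>')
lang_tokenizer.fit_on_texts(lang)

# Word indices start at 1 and 0 is reserved, so valid ids are 0 .. len(word_index);
# the embedding table therefore needs len(word_index) + 1 rows.
vocab_size = len(lang_tokenizer.word_index) + 1
embedding = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=50)
```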

I faced a similar issue; it is not related to GPU or CPU. I used a different vocabulary size and it started working. Before, I was using GloVe and got the error, so I dropped the pretrained embeddings; now my code is working fine. Basically it is a word-embedding sizing issue.

@oshlevy89 have you found a solution? I have an input data size of 30380 and an embedding input size of 30381, and I also get this error on both CPU and GPU. The index that is not found is, strangely, -1 in my case, not some value larger than the data size, which could indicate an error there. I am using tensorflow-gpu 2.0.0 on a server and also tensorflow 1.8.0 locally with a CPU. I am using a simple feed-forward network, not a Transformer model. This is part of my error stack; the first error with BaseCollectiveExecutor::StartAbort appears only on the GPU:
```
BaseCollectiveExecutor::StartAbort Invalid argument: indices[15,1] = -1 is not in [0, 30381)
[[{{node sequential/embedding/embedding_lookup}}]]
[[VariableShape/_11]]
BaseCollectiveExecutor::StartAbort Invalid argument: indices[15,1] = -1 is not in [0, 30381)
[[{{node sequential/embedding/embedding_lookup}}]]
tensorflow.python.framework.errors_impl.InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: indices[15,1] = -1 is not in [0, 30381)
[[node sequential/embedding/embedding_lookup (defined at usr/local/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751)
]]
[[VariableShape/_11]]
(1) Invalid argument: indices[15,1] = -1 is not in [0, 30381)
[[node sequential/embedding/embedding_lookup (defined at usr/local/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:1751)
]]
0 successful operations.
0 derived errors ignored. [Op:__inference_distributed_function_1035]

Function call stack:
distributed_function -> distributed_function
```
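
When the bad index is -1 rather than a too-large value, the usual culprit is a preprocessing step that maps unknown or missing tokens to -1. A quick way to locate and patch them, with a stand-in array in place of the real model input:

```python
import numpy as np

ids = np.array([[4, -1, 7], [2, 9, -1]])  # stand-in for the ids fed to the Embedding
print('negative ids at:', np.argwhere(ids < 0))
# One option: remap them to a dedicated in-range id (e.g. a reserved <unk> row).
unk_id = 0
ids = np.where(ids < 0, unk_id, ids)
```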

@KonstantinaLazaridou this is no longer an issue for me, but I can't say I remember the solution :/ I think it was a bug on my side with indexing. Sorry, can't help.

Try running data_download.py again; it will tell you the total number of tokens. In my case it is:

```
I0319 16:47:20.076332 140067448858432 tokenizer.py:121] Generated vocabulary with 33945 subtokens.
```

Modifying vocab_size in model_params.py to match may fix the problem.
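
A sketch of the corresponding edit (the exact layout of model_params.py differs between versions, so treat this as illustrative):

```python
# official/transformer/model/model_params.py (sketch): set the embedding size
# to the subtoken count reported by data_download.py.
vocab_size = 33945  # was 33708; the log reported 33945 generated subtokens
```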

"Generated vocabulary with 33945 subtokens" is the key number to put inside model_params.py.
Due to a Python string-handling change, even if we use the same code, the vocabulary is generated differently.
Closing. Thanks.
