Fairseq: RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:33

Created on 15 May 2019 · 3 comments · Source: pytorch/fairseq

Hi fairseq team, I'm using fairseq to build a translation model, and I want to customize fairseq to add linguistic factors for each training sentence as model input. I've customized the read_data method of the class IndexedRawDataset as below:
(screenshot of the customized read_data method)
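The screenshot isn't legible here, but a minimal sketch of the kind of parsing such a read_data override might do is below. All names and the `|` factor separator are assumptions, not the original code:

```python
# Hypothetical sketch (not the original screenshot): split each token of a
# factored line like "dog|NOUN" into a surface word and a linguistic factor,
# producing two parallel token streams to embed separately.
def split_factored_line(line, factor_sep="|"):
    """Split 'word|tag word|tag ...' into (words, tags)."""
    words, tags = [], []
    for token in line.strip().split():
        word, _, tag = token.partition(factor_sep)
        words.append(word)
        # Fall back to a placeholder when a token carries no factor.
        tags.append(tag if tag else "<unk_tag>")
    return words, tags
```

Each stream would then be mapped through its own dictionary before embedding; note that any tag missing from the tag dictionary must still map to a valid index, or the embedding lookup goes out of bounds.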
Then I created a custom_lstm model. In fact, I made a copy of the LSTM model and customized it as in the image below:
(screenshot of the custom_lstm model)
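For context on the concatenation that later appears in the traceback (`torch.cat((emb_tokens, emb_tags), dim=2)`), here is a shape sketch using NumPy stand-ins; the dimensions are illustrative, not taken from the model:

```python
import numpy as np

# Illustrative only: concatenating word embeddings with factor embeddings
# along the feature dimension widens the encoder input. The LSTM must be
# constructed with input_size == word_dim + tag_dim, not word_dim alone.
batch, seq, word_dim, tag_dim = 2, 5, 512, 64
emb_tokens = np.zeros((batch, seq, word_dim))
emb_tags = np.zeros((batch, seq, tag_dim))
x = np.concatenate((emb_tokens, emb_tags), axis=2)
# x.shape is (2, 5, 576) == (batch, seq, word_dim + tag_dim)
```

If the encoder's input size still matches only the word-embedding dim after this change, the shapes (or the indices feeding the second embedding table) will be inconsistent.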
The error occurs when I try to train the model with this script:

!CUDA_VISIBLE_DEVICES=0 fairseq-train fairseq/data-bin --lr 1.0 --dropout 0.2 \
--max-tokens 4000 --arch custom_lstm \
--optimizer sgd --max-epoch=12 \
--save-dir checkpoints/lstm --dataset-impl raw

Here's the error message:

/pytorch/aten/src/THC/THCTensorIndex.cu:362: void indexSelectLargeIndex(TensorInfo, TensorInfo, TensorInfo, int, int, IndexType, IndexType, long) [with T = float, IndexType = unsigned int, DstDim = 2, SrcDim = 2, IdxDim = -2, IndexIsMajor = true]: block: [71,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed.
Traceback (most recent call last):
File "/usr/local/bin/fairseq-train", line 11, in <module>
load_entry_point('fairseq', 'console_scripts', 'fairseq-train')()
File "/content/fairseq/fairseq_cli/train.py", line 313, in cli_main
main(args)
File "/content/fairseq/fairseq_cli/train.py", line 99, in main
train(args, trainer, task, epoch_itr)
File "/content/fairseq/fairseq_cli/train.py", line 137, in train
log_output = trainer.train_step(samples)
File "/content/fairseq/fairseq/trainer.py", line 242, in train_step
raise e
File "/content/fairseq/fairseq/trainer.py", line 219, in train_step
ignore_grad
File "/content/fairseq/fairseq/tasks/fairseq_task.py", line 230, in train_step
loss, sample_size, logging_output = criterion(model, sample)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/content/fairseq/fairseq/criterions/cross_entropy.py", line 30, in forward
net_output = model(**sample['net_input'])
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/content/fairseq/fairseq/models/fairseq_model.py", line 179, in forward
encoder_out = self.encoder(src_tokens, src_lengths)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/content/fairseq/fairseq/models/custom_lstm.py", line 241, in forward
x = F.dropout(torch.cat((emb_tokens, emb_tags), dim=2), p=self.dropout_in, training=self.training)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 830, in dropout
else _VF.dropout(input, p, training))
RuntimeError: Creating MTGP constants failed. at /pytorch/aten/src/THC/THCTensorRandom.cu:33

Is there something wrong with dropout?


All 3 comments

Are you still having this issue? It looks like others have gotten this error when they try to index the embedding table and the indices are out of bounds [[1](https://discuss.pytorch.org/t/solved-creating-mtgp-constants-failed-error/15084/2)]

Now I'm using pretrained embeddings for my task instead of customizing fairseq. Maybe you're correct; I think there are problems with the embedding dim from when I concatenated the word embedding with the linguistic-factor embedding. Sorry I can't help you much :(

If you run this on CPU, the error will probably be thrown from the correct line.
Of course, there's nothing wrong with dropout :)
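As the comment above suggests, on CPU the indexing error surfaces at the embedding lookup itself; on CUDA, kernels run asynchronously, so the failure is reported at a later, unrelated call (here, inside dropout). A hypothetical host-side sanity check (the name and signature are mine, not fairseq API) makes out-of-range token ids visible before the lookup:

```python
# Hypothetical debugging helper: verify that every token id is a valid row
# of the embedding table before the lookup. Run this on host-side ids so
# the bad index is reported at the real source, not at a later CUDA kernel.
def assert_in_vocab(token_ids, num_embeddings):
    bad = [t for t in token_ids if not 0 <= t < num_embeddings]
    if bad:
        raise IndexError(
            f"token ids {bad} out of range for embedding table "
            f"of size {num_embeddings}"
        )
```

Applied to this issue, checking the factor-token ids against the size of the tag embedding table would likely have pointed straight at the out-of-bounds index.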
