PyTorch: DataLoader raising inconsistent tensor sizes

Created on 25 Jun 2017 · 1 comment · Source: pytorch/pytorch

This will trigger the error:

import torchvision.datasets as dset
from torch.utils.data import DataLoader

testset = dset.ImageFolder(root='data/test', transform=transform_test)
test_loader = DataLoader(testset, batch_size=8, shuffle=True, num_workers=2)
loader = iter(test_loader)
next(loader)  # raises the RuntimeError below

Traceback:

RuntimeError: Traceback (most recent call last):
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 40, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 109, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 109, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 91, in default_collate
    return torch.stack(batch, 0, out=out)
  File "/home/ubuntu/anaconda3/lib/python3.5/site-packages/torch/functional.py", line 66, in stack
    return torch.cat(inputs, dim, out=out)
RuntimeError: inconsistent tensor sizes at /home/ubuntu/pytorch/torch/lib/TH/generic/THTensorMath.c:2625
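
For context (an explanation added here, not part of the original traceback): default_collate batches the per-sample tensors with torch.stack, which requires every tensor in the batch to have exactly the same shape. A minimal sketch of the same failure, using hypothetical tensors shaped like images whose shorter side was scaled to 224 but whose widths differ:

import torch

# default_collate ends in torch.stack; it fails whenever sample shapes differ
a = torch.zeros(3, 224, 300)   # hypothetical image tensor after Scale(224)
b = torch.zeros(3, 224, 400)   # a wider image: same height, different width
torch.stack([a, b], 0)         # RuntimeError: inconsistent tensor sizes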

Running today's PyTorch (32e666551a695cb0fee8c70474a13f3def042bb2).

Since the code itself is trivial, the problem is probably related to the context. Any ideas on where to look?

Update: I have noticed that changing transform_test makes the error go away; looking into that now.

Loading data with this transform fails:

self.input_size = 224
self.transform_test = transforms.Compose([
    # a single int scales the shorter side to 224 and keeps the aspect
    # ratio, so differently proportioned images come out in different shapes
    transforms.Scale(self.input_size),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])

but the following transform does not cause an error:

self.transform_test = transforms.Compose([
    # a size pair forces every image to the same fixed output size
    transforms.Scale((self.input_w, self.input_h)),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])
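
To make the difference concrete, here is a small illustration (mine, using blank PIL images as stand-ins, not from the original report) of how the two Scale calls behave; note that Scale was later renamed Resize in torchvision:

from PIL import Image
import torchvision.transforms as transforms

wide = Image.new('RGB', (640, 480))  # PIL sizes are (width, height)
tall = Image.new('RGB', (480, 640))

scale_int = transforms.Scale(224)          # shorter side -> 224, ratio kept
print(scale_int(wide).size)                # (298, 224)
print(scale_int(tall).size)                # (224, 298)

scale_pair = transforms.Scale((224, 224))  # fixed output size, ratio ignored
print(scale_pair(wide).size)               # (224, 224)
print(scale_pair(tall).size)               # (224, 224)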

Most helpful comment

Ah, this comes down to Scale not scaling to a square but maintaining the aspect ratio, so differently sized input images will not fit when concatenated into a batch tensor.
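
A common fix (my suggestion, not from the thread) is the standard evaluation pipeline of scaling then center-cropping, which produces a fixed 224x224 tensor for every image regardless of its aspect ratio:

import torchvision.transforms as transforms

transform_test = transforms.Compose([
    transforms.Scale(256),        # shorter side -> 256, aspect ratio preserved
    transforms.CenterCrop(224),   # then crop a fixed 224x224 square
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])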

