Vision: Unsupported operand type(s) for /: 'list' and 'int' when using the transforms.Scale() function

Created on 17 Apr 2017 · 18 comments · Source: pytorch/vision

Hi,
I wrote the code below to read MSCOCO detection images using a DataLoader. In this code I would like to resize the images to (448, 448). Here is the code:

import torchvision.datasets as dset
import torchvision.transforms as trans
import sys
sys.path.append('./coco-master/PythonAPI/')
from pycocotools.coco import COCO
import torch

det = dset.CocoDetection(root='./train2014',
                         annFile='./annotations/instances_train2014.json',
                         transform=trans.Compose([trans.Scale(size=[448, 448]),
                                                  trans.ToTensor(),
                                                  trans.Normalize((.5, .5, .5), (.5, .5, .5))]))
trainLoader = torch.utils.data.DataLoader(det, batch_size=16, num_workers=2)
trainItr = iter(trainLoader)
images, labels = trainItr.next()

But when I ran the above code, the following error occurred:

TypeError                                 Traceback (most recent call last)
in <module>()
----> 1 images, labels = trainItr.next()

/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py in __next__(self)
    172                 self.reorder_dict[idx] = batch
    173                 continue
--> 174             return self._process_next_batch(batch)
    175
    176     next = __next__  # Python 2 compatibility

/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
    196         self._put_indices()
    197         if isinstance(batch, ExceptionWrapper):
--> 198             raise batch.exc_type(batch.exc_msg)
    199         return batch
    200

TypeError: Traceback (most recent call last):
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 34, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 34, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torchvision-0.1.8-py3.5.egg/torchvision/datasets/coco.py", line 59, in __getitem__
    img = self.transform(img)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torchvision-0.1.8-py3.5.egg/torchvision/transforms.py", line 29, in __call__
    img = t(img)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torchvision-0.1.8-py3.5.egg/torchvision/transforms.py", line 139, in __call__
    ow = int(self.size * w / h)
TypeError: unsupported operand type(s) for /: 'list' and 'int'

Could you please tell me what I should do?

Most helpful comment

@mohammad-py can you try:

pip install https://github.com/pytorch/vision/archive/master.zip

All 18 comments

Do not double post. Either ask a question on discuss or open an issue.
Closing as duplicate: https://discuss.pytorch.org/t/unsupported-operand-type-s-for-list-and-int-when-using-transforms-scale-function/1927

@soumith, sorry. I posted a question there, but got no answer. I just want to find the answer as quickly as possible.

You need to update torchvision.

@fmassa, the pytorch and torchvision versions are:

pytorch        0.1.11    py35_5    soumith
torchvision    0.1.8     py35_2    soumith

@mohammad-py can you try:

pip install https://github.com/pytorch/vision/archive/master.zip

@soumith I installed my torchvision and pytorch using conda!

I understand, but try the command above.

@soumith sure, give me some time to try it. I will let you know. Thanks.

@soumith, I ran the above command. It seems something was installed, but when I ran my code the error happened again:

index created!
Traceback (most recent call last):
  File "Main.py", line 14, in <module>
    images, labels = trainItr.next()
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 174, in __next__
    return self._process_next_batch(batch)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 198, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
TypeError: Traceback (most recent call last):
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 34, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 34, in <listcomp>
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torchvision-0.1.8-py3.5.egg/torchvision/datasets/coco.py", line 59, in __getitem__
    img = self.transform(img)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torchvision-0.1.8-py3.5.egg/torchvision/transforms.py", line 29, in __call__
    img = t(img)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torchvision-0.1.8-py3.5.egg/torchvision/transforms.py", line 139, in __call__
    ow = int(self.size * w / h)
TypeError: unsupported operand type(s) for /: 'list' and 'int'

A workaround for you might be to change this line:

transform = trans.Compose([trans.Scale(size=[448, 448]), trans.ToTensor(), trans.Normalize((.5, .5, .5), (.5, .5, .5))])

to

transform = trans.Compose([trans.Scale(size=448), trans.ToTensor(), trans.Normalize((.5, .5, .5), (.5, .5, .5))])

I'll look into what's happening.
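
For context, the traceback points at the old Scale transform in torchvision 0.1.8, which assumes size is a single int (the target length of the shorter side) and keeps the aspect ratio. A simplified sketch of that logic, reconstructed from the traceback rather than copied verbatim from the library (old_scale_call is just an illustrative name):

def old_scale_call(size, img):
    # torchvision 0.1.8 expects size to be an int here
    w, h = img.size
    if w < h:
        ow = size
        oh = int(size * h / w)
    else:
        oh = size
        ow = int(size * w / h)   # with size=[448, 448], list * int replicates the list
                                 # and list / int raises the TypeError shown above
    return img.resize((ow, oh))

So passing size=[448, 448] cannot work on 0.1.8; accepting a (h, w) sequence is the newer behaviour, which is why updating torchvision is being suggested.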

@soumith: and here is the result after the above change:

Traceback (most recent call last):
  File "Main.py", line 15, in <module>
    images, labels = trainItr.next()
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 174, in __next__
    return self._process_next_batch(batch)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 198, in _process_next_batch
    raise batch.exc_type(batch.exc_msg)
RuntimeError: Traceback (most recent call last):
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 34, in _worker_loop
    samples = collate_fn([dataset[i] for i in batch_indices])
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 79, in default_collate
    return [default_collate(samples) for samples in transposed]
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 79, in <listcomp>
    return [default_collate(samples) for samples in transposed]
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/utils/data/dataloader.py", line 66, in default_collate
    return torch.stack(batch, 0)
  File "/home/mohammad/anaconda3/lib/python3.5/site-packages/torch/functional.py", line 56, in stack
    return torch.cat(list(t.unsqueeze(dim) for t in sequence), dim)
RuntimeError: inconsistent tensor sizes at /py/conda-bld/pytorch_1490979338030/work/torch/lib/TH/generic/THTensorMath.c:2548
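
For context, this second error is different from the first one. With Scale(size=448) the shorter side of each image is resized to 448 while the aspect ratio is preserved, so COCO images with different aspect ratios come out with different shapes, and default_collate cannot torch.stack them into one batch. If you had to stay on 0.1.8, one possible workaround (a hedged sketch, assuming a square center crop is acceptable for your use case) is to crop after scaling so every tensor is 3x448x448:

transform = trans.Compose([trans.Scale(size=448),
                           trans.CenterCrop(448),   # forces every image to 448x448
                           trans.ToTensor(),
                           trans.Normalize((.5, .5, .5), (.5, .5, .5))])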

I just checked this on master; you did NOT update your torchvision.

pip uninstall -y torchvision
pip uninstall -y torchvision # yes run the command again
pip install https://github.com/pytorch/vision/archive/master.zip
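
A quick way to confirm the reinstall actually took effect is to check the behaviour directly rather than a version number (a minimal sketch; the dummy image is just for illustration, no COCO data is needed):

import torchvision.transforms as trans
from PIL import Image

img = Image.new('RGB', (640, 480))              # dummy image
print(trans.Scale(size=[448, 448])(img).size)   # (448, 448) on the updated build;
                                                # the list/int TypeError on 0.1.8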

Then use your original transform:

transform = trans.Compose([trans.Scale(size=[448, 448]), trans.ToTensor(), trans.Normalize((.5, .5, .5), (.5, .5, .5))])
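
For context: on the updated torchvision, Scale with a two-element size resizes every image to exactly 448x448, so each sample comes out of ToTensor with shape 3x448x448 and default_collate can stack the batch of 16 without the inconsistent-size error seen above.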

@soumith, Thanks. The problem was solved. :+1:

As a piece of advice, please learn how to do basic debugging. This is an open-source project, so you won't have someone available to help you all the time.

@soumith okay, sure. Thanks by the way. I am quite new to Python and PyTorch.

@soumith, another problem that I don't know how to solve has come up. After successfully loading the real images with the transformation defined above, my labels variable looks empty. Sorry for this question, but I am a little new to this. Could you please help? Here is the output of the command below:
print(labels)

[[('image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id', 'image_id'), ('iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd', 'iscrowd'), ('category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id', 'category_id'), ('segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation', 'segmentation'), ('area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area', 'area'), ('id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id', 'id'), ('bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox', 'bbox')]]

Please use the forums for this kind of question. I'll help you there.

@fmassa sure, I will post there. thank you!
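
For anyone landing on this issue later: the labels printed above are not actually empty. The per-image COCO target is a list of annotation dicts, and the DataLoader's default collate function ends up zipping those dicts together (iterating over their keys), which is why you see the key names ('image_id', 'bbox', ...) instead of the values. A common workaround (a hedged sketch, not an official torchvision API; coco_collate is just an illustrative name) is a custom collate_fn that stacks the images but leaves the targets as a plain Python list:

import torch

def coco_collate(batch):
    # batch is a list of (image_tensor, annotations) pairs;
    # stack the image tensors, keep the annotation dicts untouched
    images = torch.stack([item[0] for item in batch], 0)
    targets = [item[1] for item in batch]
    return images, targets

trainLoader = torch.utils.data.DataLoader(det, batch_size=16, num_workers=2,
                                          collate_fn=coco_collate)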

