train_data = VOCSegmentation(root='/home/', year='2007', image_set='train',
                             transforms=transforms.ToTensor())
You are passing the image transform ToTensor as a joint transform for images and targets:
If you don't need a transform for your target, simply change the instantiation call to
VOCSegmentation(root='/home/', year='2007', image_set='train', transform=transforms.ToTensor())
Note the change from transforms to transform.
@pmeier Thank you!!!!
train_data = VOCSegmentation(root='/home/hanwei-1/', year='2007', image_set='train',
                             transform=transforms.ToTensor(),
                             download=False)
This did not work:
TypeError: batch must contain tensors, numbers, dicts or lists; found <class 'PIL.PngImagePlugin.PngImageFile'>
I think both the image and the target need to be transformed, so I used this call and it works perfectly:
train_data = VOCSegmentation(root='/home/hanwei-1/', year='2007', image_set='train',
                             transform=transforms.ToTensor(),
                             target_transform=transforms.ToTensor(),
                             download=False)
I think the transforms argument is a bug; it did not work.
Could you please post the actual code where this error occurred? Simply doing this works fine:
from torchvision.datasets import VOCSegmentation
from torchvision import transforms
dataset = VOCSegmentation(VOC_ROOT, year="2007", image_set="train",
                          transform=transforms.ToTensor())
image, target = dataset[0]
print(type(image), image.size())
print(type(target), target.size)
and prints:
<class 'torch.Tensor'> torch.Size([3, 281, 500])
<class 'PIL.PngImagePlugin.PngImageFile'> (500, 281)
transforms is a bug
No, it is not. It is intended as a joint transform and thus cannot be used with a single-image transform such as ToTensor(). If you pass the parameters transform and target_transform, they are internally converted into a joint transform:
which is then used in __getitem__():
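Internally, torchvision wraps the two callables into a single joint object; a minimal sketch of that merging, with illustrative names, looks like:

```python
# Minimal sketch of how transform/target_transform are merged into one
# joint callable (modeled on torchvision's internal StandardTransform;
# the class name here is illustrative).
class JointTransform:
    def __init__(self, transform=None, target_transform=None):
        self.transform = transform
        self.target_transform = target_transform

    def __call__(self, image, target):
        # Each callable only ever sees its own half of the sample.
        if self.transform is not None:
            image = self.transform(image)
        if self.target_transform is not None:
            target = self.target_transform(target)
        return image, target
```

__getitem__() then calls this object with both the image and the target, which is why a single-image transform such as ToTensor() cannot be passed as transforms directly: it would be invoked with two arguments.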
I want to use the single transforms argument to transform both the image and the target, but this argument does not work.
You still haven't shown us any code. Please do that.
Do you want to have the same transformation applied to both the image and target? If yes, you could simply do
transform = transforms.ToTensor()
dataset = VOCSegmentation(VOC_ROOT, year="2007", image_set="train",
transform=transform, target_transform=transform)
Yes, I use the same transformation:
train_data = VOCSegmentation(root='/home/', year='2007', image_set='train',
                             transforms=transforms.Compose([transforms.Resize((100, 100)),
                                                            transforms.ToTensor()]),
                             target_transform=transforms.Compose([transforms.Resize((100, 100)),
                                                                  transforms.ToTensor()]))
train_loader = torch.utils.data.DataLoader(train_data, batch_size=2)
I'm still lost: this does not throw an error. Do you still have a problem or is everything working as expected?
On a side note:
from torch.utils.data import DataLoader
from torchvision.datasets import VOCSegmentation
from torchvision import transforms
transform = transforms.Compose([
    transforms.Resize((100, 100)),
    transforms.ToTensor(),
])
dataset = VOCSegmentation('~/Downloads', year="2007", image_set="train",
                          transform=transform, target_transform=transform)
loader = DataLoader(dataset, batch_size=2)
for input, target in loader:
    print(input.size(), target.size())
(100, 100)? Since this ignores the aspect ratio of the image, the motif could be heavily distorted. Especially on masks this could have some unintended side effects. You probably want something like transforms.CenterCrop() to get all images to the same size.
@pmeier Thank you so much!
Using the transforms argument will throw an error.
We need special transforms for joint image and target transform, see https://github.com/pytorch/vision/blob/master/references/segmentation/transforms.py for an example.
This will be particularly important for randomized transforms, such as random flip.
Closing as it is standard behavior for now
@fmassa @pmeier Thank you so much!
How do I use these joint transforms?
import torchvision.references.segmentation.transforms
does not work.
@LetsGoFir This cannot work, since references is not part of the torchvision package. What @fmassa meant is that you can find examples of how to use the joint transforms in the file transforms.py, which is located in the references/segmentation folder relative to the project root, not the package root.
If you have further questions, and they don't apply exactly to the topic within this issue, please open a new one. This helps future users with problems to find what they are looking for without us pointing them towards it.
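Once that file is copied into your project, its pieces chain via a pair-wise Compose. A minimal sketch of the pattern (the same shape as the Compose in references/segmentation/transforms.py):

```python
# Pair-wise Compose: every step takes and returns (image, target), so a
# single composed object can be passed to the dataset's transforms
# (plural) argument, e.g. VOCSegmentation(..., transforms=Compose([...])).
class Compose:
    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, image, target):
        for t in self.transforms:
            image, target = t(image, target)
        return image, target
```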
Thanks, I just copied it into my code and it works!
Thanks for the help @pmeier !