Vision: engine.py error while following tutorial

Created on 30 Apr 2020 · 5 comments · Source: pytorch/vision

📚 Documentation

I have found this library via the examples at https://pytorch.org/tutorials/intermediate/torchvision_tutorial.html

I ran the Google Colab notebook and it finished successfully. However, when I copy the code from master in this repo and use it locally, I get an error.

Engine.py (screenshots of the source around line 27 omitted)

Error

```
train_one_epoch(model, optimizer, training_data_loader, device, epoch, print_freq=model_conf["hyperParameters"]["display"])
  File "/home/emcp/Dev/git/EMCP/faster-rcnn-torchvision/model_components/model/engine.py", line 27, in train_one_epoch
    images = list(image.to(device) for image in images)
  File "/home/emcp/Dev/git/EMCP/faster-rcnn-torchvision/model_components/model/engine.py", line 27, in <genexpr>
    images = list(image.to(device) for image in images)
AttributeError: 'Image' object has no attribute 'to'
```
Seems like an image did not load, perhaps?
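
For context, `.to(device)` exists only on `torch.Tensor`; a PIL image has no such method. A minimal sketch of the conversion, using a hypothetical image path:

```python
import torch
from PIL import Image
import torchvision.transforms.functional as F

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

img = Image.open("example.jpg")  # hypothetical path; any RGB image works
# img.to(device) would raise AttributeError: 'Image' object has no attribute 'to'
tensor = F.to_tensor(img)        # torch.Tensor, float32 in [0, 1], shape (C, H, W)
tensor = tensor.to(device)       # .to(device) works once the image is a tensor
```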

Labels: models, reference scripts, question, object detection

All 5 comments

When I look at my DataLoader, it seems to be returning the correct items.

The image is COCO-annotated, just FYI:

```
(<PIL.Image.Image image mode=RGB size=2880x1880 at 0x7F06E2CCAAD0>,
 [{'id': 847, 'image_id': 3135, 'category_id': 6, 'segmentation': [[90.0, 36.0, 2874.8, 35.7, 2880.0, 40.1, 2880.0, 1795.3, 2875.3, 1800.0, 3.6, 1799.7, 0.4, 1795.5, 0.2, 41.2, 6.1, 35.7, 90.0, 36.1]], 'area': 5080276, 'bbox': [0.0, 36.0, 2880.0, 1764.0], 'iscrowd': False, 'color': '#d5e47c', 'metadata': {}},
  {'id': 848, 'image_id': 3135, 'category_id': 7, 'segmentation': [[666.9, 1799.9, 2239.9, 1800.0, 2245.7, 1805.4, 2246.0, 1879.8, 634.8, 1879.9, 634.2, 1805.9, 639.4, 1799.9, 667.3, 1800.2]], 'area': 128892, 'bbox': [634.0, 1800.0, 1612.0, 80.0], 'iscrowd': False, 'color': '#6c3ee0', 'metadata': {}}])
```
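
Note the first element of the tuple is a `PIL.Image.Image`, not a tensor. A quick way to confirm the mismatch, assuming `dataset` is the Dataset feeding that DataLoader:

```python
import torch

img, target = dataset[0]   # dataset: the COCO-style Dataset shown above (hypothetical name)
print(type(img))           # prints <class 'PIL.Image.Image'> here, which is the problem
assert isinstance(img, torch.Tensor), "apply a ToTensor transform in the Dataset"
```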

@EMCP You need to feed a `torch.Tensor` to the model. Your Dataset is returning a PIL Image; that's why it's not working. We convert the PIL Image into a torch Tensor in the dataset (via the transforms).
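
Concretely, the tutorial does this with the detection reference transforms (the `transforms.py` that ships next to `engine.py` in `references/detection`). A minimal sketch following the tutorial, where `PennFudanDataset` stands in for whatever Dataset you use:

```python
import transforms as T  # references/detection/transforms.py, copied next to engine.py

def get_transform(train):
    transforms = [T.ToTensor()]  # PIL Image -> torch.Tensor, scaled to [0, 1]
    if train:
        transforms.append(T.RandomHorizontalFlip(0.5))  # augmentation for training only
    return T.Compose(transforms)

# Pass the transform into the dataset instead of None:
dataset = PennFudanDataset('PennFudanPed', get_transform(train=True))
```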

Big thank you! Feels like I'm almost there with it, then.

I'm having the exact same issue here... Could you be more precise about what needs to be changed? I checked 10 times and did not miss a single step from the tutorial.

Okay, I found it. While doing the tutorial, I had commented out these lines inside `PennFudanDataset` because they created problems during early testing:

```python
if self.transforms is not None:
    img, target = self.transforms(img, target)
```

Un-commenting these solved the issue (and, of course, led to a new one).

What I had not realized earlier in the tutorial, and what led me to wrongly comment out those two lines, is that you can easily test `PennFudanDataset` by calling it with `None` as the second argument:

```python
dataset = PennFudanDataset('PennFudanPed', None)
```
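
A sketch of how that quick check works in practice (the `.show()` call is just one hypothetical way to eyeball a sample):

```python
# With transforms=None the dataset returns raw PIL images: fine for inspection,
# but training needs tensors, so keep self.transforms applied in __getitem__.
dataset = PennFudanDataset('PennFudanPed', None)
img, target = dataset[0]
print(type(img))        # <class 'PIL.Image.Image'>
print(target["boxes"])  # the target dict is built before transforms, so it's usable
img.show()              # open the raw image in a viewer for a quick sanity check
```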
