Vision: Can't augment numpy image

Created on 8 Jan 2018 · 4 comments · Source: pytorch/vision

I'm trying to augment a small patch using torchvision.transforms:

anchors = np.zeros([4*2+1, 4*2+1, 1]).astype(np.int32)
anchors = transforms.ToPILImage('I')(anchors)
transforms.ColorJitter(brightness=0.2)(anchors)

but I'm stuck with:
ValueError: image has wrong mode

This is the stack trace:

ValueError                                Traceback (most recent call last)
<ipython-input-39-402688208231> in <module>()
      1 anchors = np.zeros([4*2+1, 4*2+1, 1]).astype(np.int32)
      2 anchors = transforms.ToPILImage('I')(anchors)
----> 3 transforms.ColorJitter(brightness=0.2)(anchors)

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
    577         transform = self.get_params(self.brightness, self.contrast,
    578                                     self.saturation, self.hue)
--> 579         return transform(img)
    580 
    581 

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
     40     def __call__(self, img):
     41         for t in self.transforms:
---> 42             img = t(img)
     43         return img
     44 

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/torchvision/transforms/transforms.py in __call__(self, img)
    230 
    231     def __call__(self, img):
--> 232         return self.lambd(img)
    233 
    234 

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/torchvision/transforms/transforms.py in <lambda>(img)
    548         if brightness > 0:
    549             brightness_factor = np.random.uniform(max(0, 1 - brightness), 1 + brightness)
--> 550             transforms.append(Lambda(lambda img: F.adjust_brightness(img, brightness_factor)))
    551 
    552         if contrast > 0:

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/torchvision/transforms/functional.py in adjust_brightness(img, brightness_factor)
    404 
    405     enhancer = ImageEnhance.Brightness(img)
--> 406     img = enhancer.enhance(brightness_factor)
    407     return img
    408 

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/PIL/ImageEnhance.py in enhance(self, factor)
     35         :rtype: :py:class:`~PIL.Image.Image`
     36         """
---> 37         return Image.blend(self.degenerate, self.image, factor)
     38 
     39 

/opt/miniconda2/envs/mypython3/lib/python3.6/site-packages/PIL/Image.py in blend(im1, im2, alpha)
   2558     im1.load()
   2559     im2.load()
-> 2560     return im1._new(core.blend(im1.im, im2.im, alpha))
   2561 
   2562 

ValueError: image has wrong mode

I tried different modes and input types, but it still doesn't work.

All 4 comments

Hi @lpuglia,

This seems to be from deep down in PIL here

You can only perform these operations on images of type uint8 (8-bit pixels).

Perhaps we should add a check in torchvision, as the PIL errors are not very helpful in this case.
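Such a check could look something like the following minimal sketch, using only NumPy. The helper name and error message are hypothetical, not actual torchvision API; the point is to fail with a message that names the dtype problem instead of PIL's opaque "image has wrong mode":

```python
import numpy as np

def check_pil_compatible(arr):
    # Hypothetical pre-check: ColorJitter ends up in PIL's ImageEnhance,
    # which only blends 8-bit images, so reject other dtypes early with
    # a message that names the actual problem.
    if arr.dtype != np.uint8:
        raise TypeError(
            "color transforms need a uint8 array, got %s" % arr.dtype)
    return arr

anchors = np.zeros([4 * 2 + 1, 4 * 2 + 1, 1], dtype=np.int32)
try:
    check_pil_compatible(anchors)
except TypeError as err:
    print(err)  # names the offending dtype, unlike the PIL error
```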

ok, this works:

anchors = np.zeros([4*2+1, 4*2+1, 1]).astype(np.uint8)
anchors = transforms.ToPILImage()(anchors)
transforms.ColorJitter(brightness=0.2)(anchors)

But:
https://github.com/pytorch/vision/blob/master/torchvision/transforms/functional.py#L44
raises:
TypeError: pic should be Tensor or ndarray. Got <class 'numpy.ndarray'>.
if the ndarray has more than 3 dimensions, which is not a very useful error message.
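A clearer pre-check would report the dimension problem separately from the type problem. This is a hypothetical sketch, not the actual torchvision code; it assumes, as ToPILImage does, that a single image must be a 2-D or 3-D array:

```python
import numpy as np

def to_pil_precheck(pic):
    # Hypothetical version of the check at functional.py#L44 that also
    # reports the dimension problem instead of only the type.
    if not isinstance(pic, np.ndarray):
        raise TypeError("pic should be an ndarray, got %s" % type(pic))
    if pic.ndim not in (2, 3):
        raise ValueError(
            "pic should be 2- or 3-dimensional, got %d dimensions" % pic.ndim)

batch = np.zeros((2, 9, 9, 1), dtype=np.uint8)  # 4-D: a batch, not one image
try:
    to_pil_precheck(batch)
except ValueError as err:
    print(err)  # points at the extra dimension, not the (correct) type
```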

Hi @lpuglia,

The input must be 8-bit unsigned ints. So this will work:

anchors = np.zeros([4*2+1, 4*2+1, 1]).astype(np.uint8)
anchors = transforms.ToPILImage()(anchors)
transforms.ColorJitter(brightness=0.2)(anchors)
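If the original data is not already uint8 (as with the int32 anchors above), one option is to rescale it into [0, 255] before wrapping it in a PIL image. A minimal sketch; the helper name and the min-max scaling choice are assumptions, not part of torchvision:

```python
import numpy as np

def to_uint8(arr):
    # Hypothetical helper: min-max rescale any numeric array into
    # [0, 255] so it can become an 8-bit PIL image for ColorJitter.
    arr = arr.astype(np.float64)
    lo, hi = arr.min(), arr.max()
    if hi > lo:
        arr = (arr - lo) / (hi - lo) * 255.0
    else:
        arr = np.zeros_like(arr)  # constant input -> all zeros
    return arr.astype(np.uint8)

anchors = np.zeros([4 * 2 + 1, 4 * 2 + 1, 1], dtype=np.int32)
print(to_uint8(anchors).dtype)  # uint8
```

Note that min-max scaling changes the pixel values; if the int32 data already lies in [0, 255], a plain astype(np.uint8) is enough.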

Closing, as this should be the solution. If you're still running into issues, feel free to comment on this issue, @lpuglia.
