When working with RGBA images, the adjust_hue function does not respect the alpha channel and caps values at 255 after the transform. A quick look at the source code shows that the image is immediately converted to HSV without retaining the alpha channel. It should be a quick fix to keep the alpha channel aside and include it when merging back into RGBA.
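A minimal sketch of that idea, using only PIL and NumPy. The function name `adjust_hue_rgba` and its structure are illustrative, not torchvision's actual implementation: it splits off the alpha channel before the HSV round-trip and re-attaches it afterwards.

```python
from PIL import Image
import numpy as np

def adjust_hue_rgba(img, hue_factor):
    """Shift hue of a PIL image while preserving an RGBA alpha channel.

    Hypothetical helper, not part of torchvision's API.
    """
    if not -0.5 <= hue_factor <= 0.5:
        raise ValueError("hue_factor must be in [-0.5, 0.5]")
    alpha = None
    if img.mode == "RGBA":
        alpha = img.getchannel("A")  # keep alpha aside
        img = img.convert("RGB")
    h, s, v = img.convert("HSV").split()
    np_h = np.array(h, dtype=np.uint8)
    # uint8 addition wraps modulo 256, which is exactly what a hue shift needs
    np_h += np.uint8(hue_factor * 255)
    h = Image.fromarray(np_h, "L")
    out = Image.merge("HSV", (h, s, v)).convert("RGB")
    if alpha is not None:
        out.putalpha(alpha)  # merge the untouched alpha back in
    return out
```

With this approach the alpha band passes through untouched, so the mean-alpha check from the repro below would match before and after the transform.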
Steps to reproduce the behavior:
from PIL import Image
import numpy as np
from torchvision.transforms.functional import adjust_hue

img = Image.open('xyz.png')  # any RGBA image
img_ = adjust_hue(img, 0.1)
print(np.array(img.split()[-1]).mean())   # alpha mean of the original
print(np.array(img_.split()[-1]).mean())  # differs: alpha is not preserved
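The root cause described above can be seen with PIL alone, without torchvision: converting an RGBA image to HSV silently drops the fourth band, so anything merged back from those HSV bands has no alpha to restore.

```python
from PIL import Image

# Synthetic RGBA image with a constant, non-opaque alpha of 128
img = Image.new("RGBA", (4, 4), (200, 50, 50, 128))

hsv = img.convert("HSV")
print(hsv.mode)            # HSV
print(len(hsv.getbands())) # 3 -- the alpha band is gone after the conversion
```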
cc @vfdev-5
@jamespltan Thanks for reporting this.
You are right that the adjust_hue() method does not currently support transparency (on either the PIL or the Tensor backend). The same applies to other transformations, and unfortunately the documentation does not mention those limitations (cc @voldemortX).
Extending support and providing a proper, consistent solution across both backends is not trivial and will require storing additional metadata about the images (for example, their mode in the case of tensors). This definitely requires further discussion, so before tackling that "beast" I think it would be best to improve the current situation by:
Any contribution that addresses the above 3 points would be highly appreciated.
Yeah I forgot to mention that in the doc...
Maybe I can do the listed 3 points tomorrow to make up for it.
That would be awesome @voldemortX, thank you. There might be more limitations on the adjust_* methods that we don't clearly mention.
Many thanks my dudes 🙏