Support fill color other than 0 for tensor affine transforms (rotate, affine).
For pixel-wise tasks (e.g. segmentation), it is important to be able to use a different fill color for the ignore label (e.g. 255).
Support different fill colors for tensors in affine transformations, as is already supported for PIL images.
Specifically, for rotate() and affine() in transforms/functional_tensor.py.
None at the moment.
The requested feature was not supported in the recently completed tensor/PIL unification (#2292), probably because PyTorch's grid_sampler(), on which the implementations are based, does not seem to support fill colors other than 0.
@vfdev-5
FYI, it is possible to make the torchvision implementation support fill colors other than 0 despite the grid_sample limitation, by performing grid_sample twice (first on the image, then on a dummy mask), so that we can mask out the fill values and replace them with the user input.
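A minimal sketch of this two-pass workaround for the rotation case (the function name, the grid construction, and the default fill value here are illustrative, not torchvision's actual API):

```python
import math

import torch
import torch.nn.functional as F


def rotate_with_fill(img, angle_deg, fill=255.0):
    # img: (N, C, H, W) float tensor. `rotate_with_fill` is a hypothetical helper.
    a = math.radians(angle_deg)
    theta = torch.tensor(
        [[math.cos(a), -math.sin(a), 0.0],
         [math.sin(a), math.cos(a), 0.0]],
        dtype=img.dtype,
    ).unsqueeze(0).repeat(img.shape[0], 1, 1)
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)
    # Pass 1: sample the image; out-of-bounds pixels come back as 0
    out = F.grid_sample(img, grid, align_corners=False)
    # Pass 2: sample an all-ones dummy mask with the same grid
    valid = F.grid_sample(torch.ones_like(img), grid, align_corners=False)
    # Replace everything that was sampled from outside the image with the fill value
    return out.masked_fill(valid < 0.5, fill)
```

For a constant image rotated by 45 degrees, the corners end up outside the original content and receive the fill value, while the interior keeps its original values.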
Thanks! It could be a good enough workaround for now.
@voldemortX following @fmassa 's idea, it could also be possible to append this data mask to the input image as an additional channel and apply grid_sample once. We think that such a solution can be added to torchvision. If you would like to work on that, it would be helpful 👍 .
I can try it out, but thinking more on it I'm a bit confused right now. Shouldn't the easiest way be replacing the 0 afterwards?
> shouldn't the easiest way be replacing the 0 afterwards?
What do you mean by replacing the 0 ?
Oh right, I understand now, the original image could have 0.
I can try it tomorrow or something (UTC-8), but I have not contributed any code before; are unit tests required for a patch like this?
> I can try it tomorrow or something (UTC-8), but I have not contributed any code before, is unit tests required for a patch like this?
Sounds good! Let me describe how I'd do that and put some links to the code to modify a bit later. Yes, tests are required. To start contributing, please read this draft CONTRIBUTING guide.
Btw, if something is unclear in the guide, feel free to comment on the PR. Thanks!
EDIT:
What to do:
- Append a data mask (e.g. `torch.ones_like(img)`) as an additional channel to the input image when fillcolor > 0, before this line in `_apply_grid_transform`: https://github.com/pytorch/vision/blob/master/torchvision/transforms/functional_tensor.py#L977
- Depending on resample, extract the transformed mask: if nearest, the mask is OK as-is; if bilinear, the mask should be binarized (e.g. with a 0.5 threshold).
- As the last step, apply the non-zero fill value to the transformed image outside the mask (if fillcolor > 0).
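The single-pass, mask-as-extra-channel approach can be sketched roughly as follows (a standalone illustration under assumed names; `affine_with_fill` is hypothetical and not the actual `_apply_grid_transform` code):

```python
import math

import torch
import torch.nn.functional as F


def affine_with_fill(img, theta, mode="bilinear", fillcolor=0.0):
    # img: (N, C, H, W) float tensor; theta: (N, 2, 3) affine matrices.
    if fillcolor != 0:
        # Step 1: ride an all-ones data mask along as an extra channel
        img = torch.cat([img, torch.ones_like(img[:, :1])], dim=1)
    grid = F.affine_grid(theta, list(img.shape), align_corners=False)
    out = F.grid_sample(img, grid, mode=mode, align_corners=False)
    if fillcolor != 0:
        out, mask = out[:, :-1], out[:, -1:]
        # Step 2: binarize the mask (needed for bilinear interpolation;
        # with nearest, the sampled mask is already exactly 0 or 1)
        invalid = mask < 0.5
        # Step 3: write the fill value outside the valid region
        out = out.masked_fill(invalid, fillcolor)
    return out
```

Compared to the two-pass workaround, this calls grid_sample only once, at the cost of one extra channel being transformed alongside the image.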
Ok, thanks.
Said feature is now supported.