I have batches of images of size (B, C, H, W) (torch.Tensor) and I want to unnormalize them with mean/std tensors (e.g. [0.4, 0.3, 0.2]) of size (C, H, W).
My question is: is there any operation in PyTorch that can do this, like dataset.map or tf.expand_dims in TensorFlow?
Be careful, it's not in PIL image format.
You can achieve that by adding a batch dimension and letting the broadcasting mechanics handle it for you:
mean, std = mean.unsqueeze(0), std.unsqueeze(0) # 1xCxHxW
y = x * std + mean
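For example, here is a minimal runnable sketch of that approach, under the assumption that mean and std really are (C, H, W) tensors matching x's spatial size (the sizes below are placeholders):

import torch

B, C, H, W = 8, 3, 32, 32            # hypothetical sizes
x = torch.rand(B, C, H, W)           # normalised batch
mean = torch.full((C, H, W), 0.3)    # placeholder per-pixel statistics
std = torch.full((C, H, W), 0.2)

# unsqueeze(0) turns (C, H, W) into (1, C, H, W); broadcasting then
# applies the same statistics to every image in the batch
y = x * std.unsqueeze(0) + mean.unsqueeze(0)
print(y.shape)  # torch.Size([8, 3, 32, 32])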
Why do your normalisation tensors (mean and std) have spatial dimensions (H and W)? This would indicate that the mean and std change depending on the position in the image, which is quite unusual. I'm guessing you want to do a per-channel normalisation, which makes mean and std 1D tensors of size C.
To follow up on your example:
device, dtype = x.device, x.dtype  # match the input batch
mean = torch.tensor([0.4, 0.3, 0.2], device=device, dtype=dtype).view(1, -1, 1, 1)  # 1xCx1x1
std = torch.tensor([0.4, 0.3, 0.2], device=device, dtype=dtype).view(1, -1, 1, 1)   # 1xCx1x1
y = x * std + mean
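As a quick sanity check, a sketch of the round trip (normalise, then denormalise) with a hypothetical batch shows the original values are recovered up to floating-point error:

import torch

x = torch.rand(8, 3, 32, 32)  # hypothetical unnormalised batch in [0, 1]

device, dtype = x.device, x.dtype
mean = torch.tensor([0.4, 0.3, 0.2], device=device, dtype=dtype).view(1, -1, 1, 1)
std = torch.tensor([0.4, 0.3, 0.2], device=device, dtype=dtype).view(1, -1, 1, 1)

x_norm = (x - mean) / std      # what torchvision.transforms.Normalize does
y = x_norm * std + mean        # the denormalisation from above

print(torch.allclose(x, y, atol=1e-6))  # True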
If you want to have the denormalisation as a torchvision transform, have a look at this comment from a similar issue.
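One way such a transform could look (a sketch, not the code from the linked comment): since Normalize(mean, std) computes (x - mean) / std, the inverse can be expressed as another Normalize with adjusted parameters:

import torch
from torchvision import transforms

mean = torch.tensor([0.4, 0.3, 0.2])
std = torch.tensor([0.4, 0.3, 0.2])

# Normalize computes (x - m) / s, so choosing m = -mean/std and s = 1/std
# gives (x + mean/std) * std = x * std + mean, i.e. the denormalisation
unnormalize = transforms.Normalize(mean=(-mean / std).tolist(),
                                   std=(1 / std).tolist())

img = torch.rand(3, 32, 32)  # hypothetical normalised image (C, H, W)
restored = unnormalize(img)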
Thank you so much!