Pytorch-lightning: A question about metrics on ddp

Created on 28 Jun 2020  ·  15 Comments  ·  Source: PyTorchLightning/pytorch-lightning

Thanks for the nice work! 🥳🥳🥳

Background

The new metrics package was released with built-in DDP support.
Dataloaders use DistributedSampler to work with DDP.
It adds extra samples so that the dataset is evenly divisible across processes.
This means that during evaluation and testing, the added duplicate samples affect the results.

Question

Does the metrics package remove these redundant extra samples to make sure the evaluation result is not affected?

I have looked into the code and found that the answer is NO.
Did I miss something, or has this simply not been addressed yet?

Labels: Metrics, discussion, question

Most helpful comment

@zerogerc we are aware of that problem, and I have an upcoming PR that will fix that issue by using SUM to sync the metric states (for accuracy, the tps and sups), which afterwards get divided

All 15 comments

Hi @haichao592, AFAIK the DistributedSampler simply does not behave like this. It only divides the dataset into disjoint subsets and does not add any extra samples. This is also the reason we haven't implemented this.

@justusschock, the DistributedSampler actually adds samples from the start of the dataset to make the number of examples evenly divisible:

        # add extra samples to make it evenly divisible
        indices += indices[:(self.total_size - len(indices))] 

Reference: https://github.com/pytorch/pytorch/blob/master/torch/utils/data/distributed.py#L79
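To make the effect concrete, here is a minimal sketch (hypothetical numbers: 10 samples across 4 processes) showing the duplicated indices that the padding introduces:

    import torch
    from torch.utils.data import TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    # 10 samples split across 4 processes: total_size is padded to 12,
    # so two samples end up being evaluated twice
    dataset = TensorDataset(torch.arange(10))

    for rank in range(4):
        sampler = DistributedSampler(dataset, num_replicas=4, rank=rank, shuffle=False)
        print(rank, list(sampler))
    # rank 0: [0, 4, 8]   rank 1: [1, 5, 9]
    # rank 2: [2, 6, 0]   rank 3: [3, 7, 1]   <- samples 0 and 1 are duplicated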

Okay, thanks @zerogerc, learned something new :)

But to answer the question here: no, we don't cover that, and I also think we can't do it so easily, because we don't know which samples those are (there may also be shuffling) and we also don't know which batch you are currently feeding into your metric. So far this is only a module that calculates and syncs with some sanity checks. More advanced features are yet to come, but I'm not sure if this one is possible :)

@SkafteNicki do you think we can add this feature? Probably not without knowing the indices and removing duplicates, which we won't do, right?

Hi @justusschock, just my thoughts on this matter.

I believe that even divisibility is only critical for backward propagation and gradient calculation. I think that writing a distributed sampler which does not add additional samples would solve the problem for the metrics calculation.

I was not aware of this either, and cannot really understand why this is the default behavior of PyTorch (at least without any kind of warning).
I guess the only way to prevent it would be to write a custom DistributedSampler that corrects this.
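For reference, a rough sketch of what such a custom sampler could look like (the class name and details here are hypothetical, not an existing PyTorch or Lightning API):

    import torch
    from torch.utils.data import DistributedSampler

    class UnpaddedDistributedSampler(DistributedSampler):
        """Sketch of a sampler that skips the padding step, so ranks may
        receive an unequal number of samples (acceptable for evaluation,
        where no gradient synchronization is needed)."""

        def __iter__(self):
            if self.shuffle:
                g = torch.Generator()
                g.manual_seed(self.epoch)
                indices = torch.randperm(len(self.dataset), generator=g).tolist()
            else:
                indices = list(range(len(self.dataset)))
            # subsample for this rank without adding extra samples
            return iter(indices[self.rank::self.num_replicas])

        def __len__(self):
            return len(range(self.rank, len(self.dataset), self.num_replicas))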

@SkafteNicki @zerogerc when you think about it, this makes sense (and cannot be changed that easily without dropping the last batches), since you cannot sync/reduce something that is not available on some of the processes (which would be the case if the number of batches/samples is not evenly divisible).

@justusschock for something like that to work, wouldn't a torch.distributed.barrier be needed before any reduction?

@justusschock, @SkafteNicki
One more concern.

I think that such a custom sampler would break the MEAN reduce operation for metrics. For example, accuracy cannot be correctly reduced if each process handles a different number of examples. For a correct reduction, each process would have to track how many examples it handled. Another approach is to return tps and sups and always use the SUM operation for the DDP reduction.
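A small numerical sketch of why MEAN breaks and SUM works (hypothetical counts):

    # hypothetical per-rank counts: rank 0 sees 8 samples, rank 1 sees 2
    correct = [6, 1]  # correctly classified samples per rank
    total = [8, 2]    # samples handled per rank

    # MEAN of per-rank accuracies is biased when the counts differ
    mean_of_accs = sum(c / t for c, t in zip(correct, total)) / 2  # (0.75 + 0.5) / 2 = 0.625

    # SUM-reducing the raw counts and dividing afterwards is correct
    true_acc = sum(correct) / sum(total)  # 7 / 10 = 0.7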

@zerogerc we are aware of that problem, and I have an upcoming PR that will fix that issue by using SUM to sync the metric states (for accuracy, the tps and sups), which afterwards get divided

This problem was also discussed in pytorch (https://github.com/pytorch/pytorch/issues/22584) with the solution that the only way to do this would be to keep track of the sample ids and then filter the computations based on that.

Interestingly enough, this is the very reason the official imagenet example only adds a DistributedSampler to its train dataloader and not to its validation dataloader (https://github.com/pytorch/examples/blob/49ec0bd72b85be55579ae8ceb278c66145f593e1/imagenet/main.py#L216-L233)
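That pattern boils down to roughly the following (a simplified sketch with dummy datasets standing in for the real ones):

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    # dummy stand-ins for the real datasets
    train_dataset = TensorDataset(torch.randn(100, 3), torch.randint(0, 10, (100,)))
    val_dataset = TensorDataset(torch.randn(30, 3), torch.randint(0, 10, (30,)))

    # train loader: shard the data across processes with DistributedSampler
    train_sampler = DistributedSampler(train_dataset, num_replicas=2, rank=0)
    train_loader = DataLoader(train_dataset, batch_size=8, sampler=train_sampler)

    # val loader: no DistributedSampler, so every process evaluates the full
    # validation set and no padded/duplicated samples are introduced
    val_loader = DataLoader(val_dataset, batch_size=8, shuffle=False)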

I think that tracking sample ids and filtering would add significant overhead to the validation step.
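Concretely, the filtering step itself could look something like this (a purely illustrative sketch on already-gathered, hypothetical per-sample results; the cross-process gathering of ids, which is presumably where most of the overhead would come from, is omitted):

    import torch

    # hypothetical gathered results, including the duplicates that the
    # DistributedSampler padding introduces (sample ids 0 and 1 appear twice)
    sample_ids = torch.tensor([0, 4, 8, 1, 5, 9, 2, 6, 0, 3, 7, 1])
    correct = torch.tensor([1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1])

    # keep only the first occurrence of each sample id
    seen, keep = set(), []
    for i, sid in enumerate(sample_ids.tolist()):
        if sid not in seen:
            seen.add(sid)
            keep.append(i)

    accuracy = correct[torch.tensor(keep)].float().mean().item()
    print(accuracy)  # computed over the 10 unique samples only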

Judging from the implementation of DistributedDataParallel, the computation only hangs if gradient computation is required, so the uneven distribution should work. Needs testing, though.

https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/distributed.py#L501

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

Judging from the implementation of DistributedDataParallel, the computation only hangs if gradient computation is required, so the uneven distribution should work. Needs testing, though.

@zerogerc did you test it?

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
