Re-opening the case for #513, i.e. supporting the NVIDIA DALI iterator as a possible data loader.
I have submitted a WIP pull request against master: #789.
I would like to know if it is OK for the test to depend on nvidia-dali in addition to apex.
The DALI iterator does not support resetting before the epoch has finished. I suppose I should emit a warning for that.
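A minimal sketch of the warning behavior described above. The class and method names here are purely illustrative (not taken from the actual PR); it only demonstrates warning on a mid-epoch `reset()` instead of silently rewinding:

```python
import warnings


class EpochIterator:
    """Illustrative stand-in for a DALI-style iterator that cannot be
    reset mid-epoch. Names are hypothetical, not from PR #789."""

    def __init__(self, data):
        self._data = list(data)
        self._pos = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self._pos >= len(self._data):
            raise StopIteration
        item = self._data[self._pos]
        self._pos += 1
        return item

    def reset(self):
        if self._pos < len(self._data):
            # Mid-epoch reset is unsupported: warn and leave the position alone.
            warnings.warn(
                "reset() called before the epoch finished; the DALI "
                "iterator only supports resetting at epoch end."
            )
            return
        self._pos = 0
```

So a `reset()` issued mid-epoch is a no-op with a warning, while a reset at epoch end rewinds normally.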
@smallzzy this is sick. super excited by this feature
@smallzzy we have cleaned up the API and it should now be stable... Mind resuming your addition?
Any news on DALI support?
@brunoalano interested in implementing DALI?
@Borda Do you have a plan for what needs to be done to support it? I'm available to do it with minimal guidance (I started using PyTorch Lightning and DALI recently).
We had an almost-finished PR some time ago, so I guess you can resume from that point... #789
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Is anyone working on this currently?
I tried converting PyTorch Lightning's MNIST example to use DALI, and it seems to work out of the box without requiring any modification to PyTorch Lightning's internals. See below.
https://gist.github.com/irustandi/3d180ed3ec9d7ff4e73d3fdbd67df3ca
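The reason no internal changes are needed is that Lightning only requires the object returned from `train_dataloader` to be iterable and yield batches in the shape `training_step` expects. A thin adapter in roughly this spirit (a hedged sketch, not the gist's actual code; the list-of-dicts batch format assumed here matches DALI's `DALIGenericIterator` with an `output_map` of `["data", "label"]`) is enough:

```python
class DALIWrapper:
    """Hypothetical adapter: unwraps DALI's list-of-dicts batch format
    into (data, target) tuples that a training_step expects."""

    def __init__(self, dali_iter):
        # dali_iter is assumed to yield [{"data": ..., "label": ...}]
        # per step, one dict per pipeline, as DALIGenericIterator does.
        self.dali_iter = dali_iter

    def __iter__(self):
        for batch in self.dali_iter:
            out = batch[0]
            yield out["data"], out["label"]

    def __len__(self):
        return len(self.dali_iter)
```

Returning `DALIWrapper(dali_iter)` from `train_dataloader` would then feed `(data, label)` batches to `training_step` like any regular `DataLoader`.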
@irustandi cool, mind adding it as an example and sending a PR?
@Borda sure, I will create the PR.
This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!
I want to know whether DALI works well with DDP and AMP, and whether DALI's pipeline performs well in a distributed setting.