I would like to know whether it's possible to train a model with multiprocess parallelism (no GPU available) using Lightning, i.e. a synchronous analogue of https://pytorch.org/docs/stable/notes/multiprocessing.html#hogwild. After a quick glance, I have the impression that all the parallelism options in Trainer are GPU-based. If I'm not mistaken, torch.nn.parallel.DistributedDataParallel itself supports multiprocess CPU-only training.
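For reference, here is a minimal sketch of the kind of synchronous CPU-only setup I mean, using plain PyTorch DDP with the `gloo` backend (the model, data, port, and hyperparameters are just placeholders):

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def train(rank, world_size):
    # gloo is the backend that supports CPU-only collectives.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"  # any free port
    dist.init_process_group("gloo", rank=rank, world_size=world_size)

    model = DDP(nn.Linear(10, 1))  # no device_ids -> stays on CPU
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for _ in range(10):
        x, y = torch.randn(32, 10), torch.randn(32, 1)  # toy batch
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()  # gradients are all-reduced across processes here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = 4  # number of CPU worker processes
    mp.spawn(train, args=(world_size,), nprocs=world_size)
```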
Good question. We only support distributed GPU training at the moment, but we would welcome a PR adding multi-CPU options.
Hah! I was just looking for this. If nothing else, it would make testing DDP much easier.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@artemru @neggert are you interested in sending a PR? :robot:
I would like to take this up.
Cool! Thx @skepticleo
Looks like this was recently added in #1158 :)
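For anyone landing here later, the usage would then be roughly as below. This is a sketch only: the exact argument names depend on the Lightning release, so check the docs for your version.

```python
from pytorch_lightning import Trainer

# Around the time of #1158, the backend was selected via `distributed_backend`;
# `model` is assumed to be a LightningModule defined elsewhere.
trainer = Trainer(distributed_backend="ddp_cpu", num_processes=4)
trainer.fit(model)
```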