Pytorch-lightning: AttributeError when using multiple dataloaders

Created on 8 Oct 2020  ·  3 comments  ·  Source: PyTorchLightning/pytorch-lightning

🐛 Bug

When training with multiple dataloaders and using self.log() for logging, I get the following AttributeError:

AttributeError: 'dict' object has no attribute 'get_epoch_log_metrics'
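
For reference, a minimal sketch of the kind of setup that triggers this. The module, shapes, and metric names below are made up purely for illustration; the key point is logging the same key from two validation dataloaders:

import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class MultiLoaderModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(8, 1)

    def _loss(self, batch):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def training_step(self, batch, batch_idx):
        loss = self._loss(batch)
        self.log('train_loss', loss)
        return loss

    def validation_step(self, batch, batch_idx, dataloader_idx):
        # Logging the same key from two dataloaders makes Lightning rename
        # it to val_loss/dataloader_idx_{i}, which hits the broken code path.
        self.log('val_loss', self._loss(batch))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)

    def train_dataloader(self):
        return self._make_loader()

    def val_dataloader(self):
        # Two validation dataloaders -> per-dataloader metric renaming.
        return [self._make_loader(), self._make_loader()]

    @staticmethod
    def _make_loader():
        return DataLoader(TensorDataset(torch.randn(32, 8), torch.randn(32, 1)), batch_size=8)

pl.Trainer(max_epochs=1).fit(MultiLoaderModule())

In the affected version, a one-epoch fit on a module like this raises the AttributeError above when the validation metrics are aggregated at epoch end.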

Proposed fix

After some digging through the code, I suspect this line is the culprit. I'm guessing it should be replaced with:

# Result is Lightning's internal step-result container
# (pytorch_lightning.core.step_result.Result in the 1.0.x series).
result = Result()
result.update({
    f'{k}/dataloader_idx_{dataloader_idx}': v
    for k, v in metrics.items()
})

or something similar.
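
For what it's worth, here is a stripped-down illustration of why the plain dict breaks the epoch-end aggregation. The Result class below is only a simplified stand-in for Lightning's internal one, not its real implementation:

class Result(dict):
    # Simplified stand-in for pytorch_lightning.core.step_result.Result.
    def get_epoch_log_metrics(self):
        # The real class filters by per-metric logging flags;
        # this stand-in just returns everything.
        return dict(self)

dataloader_idx = 0
metrics = {'val_loss': 0.42}

# What the buggy line effectively produces: a plain dict, so a later call
# to get_epoch_log_metrics() raises the AttributeError from this report.
renamed = {f'{k}/dataloader_idx_{dataloader_idx}': v for k, v in metrics.items()}

# With the proposed fix, the renamed metrics stay inside a Result, and the
# epoch-end aggregation can still call the method on it.
result = Result()
result.update({f'{k}/dataloader_idx_{dataloader_idx}': v for k, v in metrics.items()})
print(result.get_epoch_log_metrics())  # {'val_loss/dataloader_idx_0': 0.42}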

All 3 comments

Hi! Thanks for your contribution, great first issue!

I ran into the same issue and came to the same conclusion about the culprit. It's returning a dict where a Result is needed. I'm not sure whether making a new Result is sufficient, or if it needs the other metadata too.

I took a crack at it in that PR ^, but I'm still extremely new to the codebase, so I have very little confidence in it.

This is a major bug and strange behavior overall. Why is PL even renaming my metrics when I explicitly set them while logging?

