Pytorch-lightning: Bug in average_precision Metric

Created on 22 Jun 2020  ·  4 Comments  ·  Source: PyTorchLightning/pytorch-lightning

🐛 Bug

Hi everyone, I encountered a bug when using the average_precision metric from pytorch_lightning.metrics.functional.classification: it yields incorrect results (negative values).

There seem to be missing parentheses in the code here:

https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/metrics/functional/classification.py#L847

It works when corrected to:

return -torch.sum((recall[1:] - recall[:-1]) * precision[:-1])
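
For context, this is an operator-precedence issue: without the parentheses, * binds tighter than -, so the line computes recall[1:] - (recall[:-1] * precision[:-1]) instead of the intended recall differences. A minimal sketch with made-up toy values (not taken from the library) that shows the two expressions diverging:

import torch

# Hypothetical toy curve values, chosen only to expose the precedence issue.
recall = torch.tensor([1.0, 0.9, 0.0])     # decreasing, as on a precision-recall curve
precision = torch.tensor([0.1, 0.5, 1.0])

# Buggy: * binds tighter than -, i.e. recall[1:] - (recall[:-1] * precision[:-1])
buggy = -torch.sum(recall[1:] - recall[:-1] * precision[:-1])

# Fixed: parentheses force the intended recall differences
fixed = -torch.sum((recall[1:] - recall[:-1]) * precision[:-1])

print(buggy)  # ≈ tensor(-0.3500), a negative "average precision"
print(fixed)  # ≈ tensor(0.4600), a valid non-negative value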

To reproduce the negative results:

import torch
import pytorch_lightning.metrics.functional.classification as M

torch.manual_seed(23)
truth = (torch.rand(100) > .6)  # random boolean targets
pred = torch.rand(100)          # random prediction scores in [0, 1)

M.average_precision(pred, truth)  # returns a negative value, impossible for average precision
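
As a sanity check (a sketch assuming scikit-learn is installed; it is not referenced in the original report), the patched metric can be compared against sklearn.metrics.average_precision_score, which it should match up to floating-point tolerance:

import torch
from sklearn.metrics import average_precision_score
import pytorch_lightning.metrics.functional.classification as M

torch.manual_seed(23)
truth = (torch.rand(100) > .6)
pred = torch.rand(100)

# Reference value from scikit-learn
print(average_precision_score(truth.numpy(), pred.numpy()))

# With the parenthesis fix applied, this should agree with the sklearn value
print(M.average_precision(pred, truth))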

I did not find an existing issue on this topic. If needed, I can submit a PR.

Thanks :relaxed:

Labels: bug / fix, help wanted


All 4 comments

Hi! Thanks for your contribution, great first issue!

I'd like to fix it

Hi, I have already created a branch for the PR!

Thanks for the issue and the fix! @InCogNiTo124 Just saying: there are many other open issues if you want to take a stab at one :)
