Pytorch-lightning: Bug in average_precision Metric

Created on 22 Jun 2020  ·  4 Comments  ·  Source: PyTorchLightning/pytorch-lightning

🐛 Bug

Hi everyone, I encountered a bug when using the average_precision metric (pytorch_lightning.metrics.functional.classification): it yields incorrect (negative) results.

There seems to be a missing pair of parentheses in the code here:

https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/metrics/functional/classification.py#L847

It works when corrected as:

return -torch.sum((recall[1:] - recall[:-1]) * precision[:-1])
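For context, the issue is operator precedence: without the inner parentheses, `*` binds tighter than `-`, so `recall[:-1] * precision[:-1]` is evaluated before the subtraction instead of weighting the recall increments by precision. A minimal sketch with toy (made-up) precision/recall values, using plain Python floats rather than the actual library code, shows how the two expressions diverge:

```python
# Toy values, ordered by increasing threshold as in the library's
# precision_recall_curve output: recall decreases from 1 to 0.
# These numbers are illustrative only, not from the bug report.
recall = [1.0, 0.9, 0.0]
precision = [0.1, 0.5, 1.0]

pairs = list(zip(recall[1:], recall[:-1], precision[:-1]))

# Assumed buggy form: multiplication binds tighter, so each term is
# r_next - (r_prev * p) instead of (r_next - r_prev) * p.
buggy = -sum(r_next - r_prev * p for r_next, r_prev, p in pairs)

# Fixed form: take the recall decrement first, then weight by precision.
fixed = -sum((r_next - r_prev) * p for r_next, r_prev, p in pairs)

print(buggy)  # negative for these values, which is impossible for AP
print(fixed)  # a valid value in [0, 1]
```

The fixed expression matches the usual step-wise average-precision sum, AP = Σ (R_n − R_{n−1}) · P_n, with the leading minus compensating for recall decreasing along the threshold axis.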

To reproduce the negative results:

import torch
import pytorch_lightning.metrics.functional.classification as M

torch.manual_seed(23)
truth = (torch.rand(100) > .6)
pred = torch.rand(100)

M.average_precision(pred, truth)

I did not find an issue on this topic yet. If needed I can submit a PR.

Thanks :relaxed:

bug / fix help wanted

Most helpful comment

Hi, I have already created a branch for the PR !

All 4 comments

Hi! thanks for your contribution!, great first issue!

I'd like to fix it

Hi, I have already created a branch for the PR !

Thanks for the issue and the fix! @InCogNiTo124 Just saying, there are many other open issues if you want to take a stab at them :)

