Hi everyone, I encountered a bug when using the average_precision metric (pytorch_lightning.metrics.functional.classification). It can return incorrect (negative) values.
There seems to be a missing parenthesis in the code here:
It works when corrected to:
return -torch.sum((recall[1:] - recall[:-1]) * precision[:-1])
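To make the precedence issue concrete, here is a small torch-free sketch with made-up recall/precision values (not the library's internals): without the parentheses, `*` binds tighter than `-`, so only recall[:-1] gets scaled by precision before the subtraction, which can drive the sum negative.

```python
# Hypothetical, hand-picked values just to illustrate the precedence bug;
# recall is decreasing (sorted by threshold), as in the metric.
recall = [1.0, 0.9, 0.8]
precision = [0.1, 0.1, 1.0]

# Buggy form: recall[1:] - recall[:-1] * precision[:-1]
# Multiplication happens first, so the recall difference is never formed.
buggy = -sum(r1 - r0 * p
             for r1, r0, p in zip(recall[1:], recall[:-1], precision[:-1]))

# Fixed form: (recall[1:] - recall[:-1]) * precision[:-1]
# The recall difference is computed first, then weighted by precision.
fixed = -sum((r1 - r0) * p
             for r1, r0, p in zip(recall[1:], recall[:-1], precision[:-1]))

print(buggy)  # negative, like the reported bug
print(fixed)  # small positive AP contribution
```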
To reproduce the negative results:
import torch
import pytorch_lightning.metrics.functional.classification as M
torch.manual_seed(23)
truth = (torch.rand(100) > .6)
pred = torch.rand(100)
M.average_precision(pred, truth)
I did not find an existing issue on this topic. If needed, I can submit a PR.
Thanks :relaxed:
Hi! Thanks for your contribution, great first issue!
I'd like to fix it
Hi, I have already created a branch for the PR!
Thanks for the issue and the fix! @InCogNiTo124 Just saying, there are many other open issues if you want to take a stab at them :)