Pytorch-lightning: Feature request: Add Mean Average Precision (mAP) metric

Created on 8 Jul 2020 · 8 comments · Source: PyTorchLightning/pytorch-lightning

The main metric for object detection tasks is Mean Average Precision (mAP). It should be implemented in PyTorch and computed on the GPU.

It would be nice to add it to the collection of the metrics.

An example implementation using NumPy:

https://github.com/ternaus/iglovikov_helper_functions/blob/master/iglovikov_helper_functions/metrics/map.py
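For reference, the core of a NumPy implementation like the one linked above is the per-class average precision computed from confidence-ranked TP/FP flags. A minimal non-interpolated sketch (function and parameter names here are illustrative, not taken from the linked file):

```python
import numpy as np

def average_precision(tp_flags, n_gt):
    """Non-interpolated AP: mean of precision@k over the ranks k where a
    true positive occurs, normalized by the number of ground-truth boxes.
    `tp_flags` must be ordered by descending prediction confidence."""
    tp_flags = np.asarray(tp_flags, dtype=float)
    if n_gt == 0:
        return 0.0
    cum_tp = np.cumsum(tp_flags)
    # precision after the k-th prediction (1-indexed ranks)
    precision_at_k = cum_tp / (np.arange(len(tp_flags)) + 1)
    return float((precision_at_k * tp_flags).sum() / n_gt)

# Example: flags [TP, FP, TP] with 2 ground truths:
# precision@1 = 1, precision@3 = 2/3, so AP = (1 + 2/3) / 2 = 5/6
```

Interpolated variants (e.g. PASCAL VOC's 11-point scheme) differ only in how the precision-recall curve is summarized.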

Labels: Hacktoberfest, Metrics, enhancement, good first issue, help wanted

Most helpful comment

Hello! First time contributor here. I'd like to take a shot at this!

All 8 comments

Hello! First time contributor here. I'd like to take a shot at this!

would be great to have that metric!

Not sure yet, but it looks like AveragePrecision (https://pytorch-lightning.readthedocs.io/en/stable/metrics.html) seems to act like mAP, although they didn't add "Mean" to the name?

I recently managed to implement a minimal version of _Kaggle's_ Mean Average Precision metric. The difference is in the calculation itself. You can find the details here:

https://www.kaggle.com/c/global-wheat-detection/overview/evaluation

My kernel: pytorch-mean-absolute-precision-calculation

With slight modifications to the mAP formula, I suppose we could integrate this metric into pytorch-lightning (since average precision is already implemented).

I've already written the logic to match predicted boxes to ground-truth boxes (taking their respective scores into consideration), so have a look at the kernel and let me know if you find any issues.

This issue has been automatically marked as stale because it hasn't had any recent activity. This issue will be closed in 7 days if no further activity occurs. Thank you for your contributions, Pytorch Lightning Team!

Hi, I would love to take this on. @SkafteNicki could you assign it to me?

@briankosw thank you for wanting to contribute!
Please ping me in your PR.

We need this! 🤩

For PyTorch Lightning modules focused on multi-class classification, it would be useful to obtain the mAP metric over the test_dataloader samples in a standard way that works with DDP with minor changes to the architecture. This metric is commonly used to draw conclusions about the representation-learning side of deep learning architectures without using any thresholds or max functions to take hard decisions on the predictions.

Most classifiers have a pair of fully-connected layers that produce class-wise likelihood predictions. The output of the last layer before the FC layers could be used for a retrieval task over the test dataset, and mAP could be calculated over that.
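As a concrete illustration of the retrieval use case above, here is a minimal sketch that computes mAP over penultimate-layer embeddings, treating items with the query's label as relevant. The function name and interface are hypothetical, not a Lightning API.

```python
import numpy as np

def retrieval_map(embeddings, labels):
    """Retrieval mAP: each sample queries all others by cosine
    similarity; a retrieved item is relevant if it shares the query's
    label. No thresholds or argmax are involved."""
    emb = np.asarray(embeddings, dtype=float)
    labels = np.asarray(labels)
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sims = emb @ emb.T
    aps = []
    for q in range(len(labels)):
        order = np.argsort(-sims[q])
        order = order[order != q]             # drop the query itself
        rel = (labels[order] == labels[q]).astype(float)
        if rel.sum() == 0:
            continue                          # query has no relevant items
        prec_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
        aps.append((prec_at_k * rel).sum() / rel.sum())
    return float(np.mean(aps))
```

Making this DDP-friendly would mainly mean gathering embeddings and labels across ranks before the pairwise similarity step.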

