I used my own dataset, but named it the same as COCO.
When I run test_net.py with the trained model, I get AP=0,
even though the scores in segs.json are high.
I also ran prediction on some images, and the results look good.
Could you help me with this problem?
Thanks~
Looks like there is a problem in your validation set.
You can try to visualize your validation data first.
In cocoapi, there is a demo script for it.
Visualize
I had the same issue, and the script you mentioned helped with debugging, thanks. The mistake I made was in the numbering of the annotation objects in my own COCO-style dataset: I thought each annotation's "id" only had to be unique per image, but it has to be unique across the whole dataset. This caused the wrong annotations to be used. @mitsuix Maybe worth checking on your side as well?!
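The pitfall described above is easy to check mechanically: annotation `"id"` values must be unique across the entire file, not just within one image. A minimal sketch (the file name and the demo data are placeholders, not from the original thread):

```python
# Check a COCO-style annotation file for duplicate annotation ids,
# the exact mistake described above (ids restarting per image).
import json, os, tempfile
from collections import Counter

def find_duplicate_ann_ids(ann_file):
    """Return annotation ids that occur more than once in the file."""
    with open(ann_file) as f:
        anns = json.load(f)['annotations']
    counts = Counter(a['id'] for a in anns)
    return sorted(i for i, n in counts.items() if n > 1)

# Demo dataset reproducing the bug: id 1 is reused for a second image.
bad = {'annotations': [
    {'id': 1, 'image_id': 1},
    {'id': 2, 'image_id': 1},
    {'id': 1, 'image_id': 2},   # duplicate id on a different image
]}
path = os.path.join(tempfile.mkdtemp(), 'instances_val.json')
with open(path, 'w') as f:
    json.dump(bad, f)

print(find_duplicate_ann_ids(path))  # → [1]
```

An empty result means every annotation id is unique dataset-wide, which is what pycocotools expects when indexing.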
I've tried the script above and my validation dataset looks correct, but I still get AP=0.
Any other possible reason?
I have the same problem, any update?
I have the same problem on coco_2017_val; all APs are 0.
I also have the same problem, any update?