python tools/test.py configs/faster_rcnn_r50_fpn_1x.py weights/epoch_500.pth --out ./result/result_500.pkl --eval bbox --show
and it shows:
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 200/200, 4.5 task/s, elapsed: 45s, ETA: 0s
writing results to ./result/result_500.pkl
Starting evaluate bbox
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type bbox
Traceback (most recent call last):
  File "tools/test.py", line 197, in <module>
    main()
  File "tools/test.py", line 186, in main
    coco_eval(result_file, eval_types, dataset.coco)
  File "/home/hhm/anaconda3/envs/pytorch/lib/python3.6/site-packages/mmdet-0.6.0+unknown-py3.6.egg/mmdet/core/evaluation/coco_utils.py", line 36, in coco_eval
    cocoEval.evaluate()
  File "/home/hhm/anaconda3/envs/pytorch/lib/python3.6/site-packages/pycocotools-2.0.0-py3.6-linux-x86_64.egg/pycocotools/cocoeval.py", line 156, in evaluate
    for catId in catIds
  File "/home/hhm/anaconda3/envs/pytorch/lib/python3.6/site-packages/pycocotools-2.0.0-py3.6-linux-x86_64.egg/pycocotools/cocoeval.py", line 158, in <listcomp>
    for imgId in p.imgIds
  File "/home/hhm/anaconda3/envs/pytorch/lib/python3.6/site-packages/pycocotools-2.0.0-py3.6-linux-x86_64.egg/pycocotools/cocoeval.py", line 252, in evaluateImg
    if g['ignore'] or (g['area']<aRng[0] or g['area']>aRng[1]):
KeyError: 'area'
I can't get the AP and AR, so what is wrong?
The annotation file is not correctly converted to COCO style.
I am getting the same error and cannot find anything wrong with my converted COCO-style annotation. Here is a sample of my annotation file:
{
"images": [
{
"file_name": "007292.png",
"id": 1,
"width": 1392,
"height": 512
},
{
"file_name": "000603.png",
"id": 2,
"width": 1392,
"height": 512
},
{
"file_name": "004313.png",
"id": 3,
"width": 1392,
"height": 512
},
{
"file_name": "006401.png",
"id": 4,
"width": 1392,
"height": 512
},
{
"file_name": "005442.png",
"id": 5,
"width": 1392,
"height": 512
}
],
"annotations": [
{
"image_id": 1,
"id": 1,
"category_id": 1,
"bbox": [
589.08,
176.53,
26.719999999999914,
26.409999999999997
],
"iscrowd": 0
},
{
"image_id": 1,
"id": 2,
"category_id": 1,
"bbox": [
235.9,
190.63,
115.16,
57.78
],
"iscrowd": 0
},
{
"image_id": 1,
"id": 3,
"category_id": 1,
"bbox": [
426.57,
184.2,
42.0,
26.700000000000017
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 4,
"category_id": 1,
"bbox": [
1211.2,
182.65,
11.799999999999955,
186.35
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 5,
"category_id": 1,
"bbox": [
386.94,
180.98,
57.80000000000001,
30.55000000000001
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 6,
"category_id": 1,
"bbox": [
736.21,
173.49,
113.90999999999997,
96.44999999999999
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 7,
"category_id": 1,
"bbox": [
701.98,
174.7,
91.55999999999995,
66.01000000000002
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 8,
"category_id": 1,
"bbox": [
682.42,
176.25,
58.200000000000045,
47.53
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 9,
"category_id": 1,
"bbox": [
667.8,
175.85,
51.190000000000055,
39.24000000000001
],
"iscrowd": 0
},
{
"image_id": 2,
"id": 10,
"category_id": 1,
"bbox": [
654.6,
176.88,
31.110000000000014,
26.49000000000001
],
"iscrowd": 0
},
{
"image_id": 3,
"id": 11,
"category_id": 1,
"bbox": [
267.69,
179.7,
101.13,
33.120000000000005
],
"iscrowd": 0
},
{
"image_id": 3,
"id": 12,
"category_id": 1,
"bbox": [
461.31,
176.05,
72.38000000000005,
28.73999999999998
],
"iscrowd": 0
},
{
"image_id": 3,
"id": 13,
"category_id": 1,
"bbox": [
600.36,
177.08,
52.360000000000014,
23.299999999999983
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 14,
"category_id": 1,
"bbox": [
1061.94,
96.68,
179.05999999999995,
277.32
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 15,
"category_id": 1,
"bbox": [
280.52,
184.02,
148.01,
96.92999999999998
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 16,
"category_id": 1,
"bbox": [
143.54,
179.75,
350.11,
194.25
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 17,
"category_id": 1,
"bbox": [
861.45,
139.2,
178.20000000000005,
64.58000000000001
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 18,
"category_id": 1,
"bbox": [
1018.27,
144.44,
88.04999999999995,
43.25
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 19,
"category_id": 1,
"bbox": [
1061.23,
147.01,
100.31999999999994,
39.27000000000001
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 20,
"category_id": 1,
"bbox": [
439.12,
184.57,
66.10000000000002,
43.43000000000001
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 21,
"category_id": 1,
"bbox": [
381.68,
184.81,
98.5,
63.59
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 22,
"category_id": 1,
"bbox": [
673.9,
172.28,
52.389999999999986,
36.53999999999999
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 23,
"category_id": 1,
"bbox": [
473.3,
180.94,
49.079999999999984,
36.900000000000006
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 24,
"category_id": 1,
"bbox": [
609.73,
179.26,
35.860000000000014,
27.670000000000016
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 25,
"category_id": 1,
"bbox": [
668.0,
173.81,
88.37,
31.150000000000006
],
"iscrowd": 0
},
{
"image_id": 4,
"id": 26,
"category_id": 1,
"bbox": [
585.17,
172.42,
40.520000000000095,
15.630000000000024
],
"iscrowd": 0
},
{
"image_id": 5,
"id": 27,
"category_id": 1,
"bbox": [
192.88,
178.88,
74.23000000000002,
33.49000000000001
],
"iscrowd": 0
},
{
"image_id": 5,
"id": 28,
"category_id": 1,
"bbox": [
250.68,
179.92,
65.70999999999998,
26.27000000000001
],
"iscrowd": 0
},
{
"image_id": 5,
"id": 29,
"category_id": 1,
"bbox": [
306.54,
178.95,
55.48999999999995,
22.670000000000016
],
"iscrowd": 0
}
],
"categories": [
{
"name": "Car",
"id": 1
}
]
}
@pittyacg did you find a solution to your problem? I have a very similar annotation file without the "area" field.
How to solve this issue?
Just add an 'area' key (= bbox width * height) for every annotation.
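A minimal sketch of that fix, patching the annotation dict in memory (in practice you would `json.load()` your annotation file first and `json.dump()` it back afterwards; the sample values below are taken from the annotation file posted above):

```python
import json

# COCO-style dict as loaded from the annotation file (truncated to one entry).
coco = {
    "annotations": [
        {"image_id": 1, "id": 1, "category_id": 1,
         "bbox": [589.08, 176.53, 26.72, 26.41], "iscrowd": 0},
    ]
}

# pycocotools reads g['area'] for every ground-truth annotation; for
# box-only annotations, bbox width * height is the usual substitute.
for ann in coco["annotations"]:
    _, _, w, h = ann["bbox"]
    ann["area"] = w * h

print(json.dumps(coco["annotations"][0]))
```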
According to the description of the COCO dataset, the 'area' key represents the area covered by the pixels belonging to the segmented object. It has nothing to do with the detection annotations, so why do we need the 'area' field for a detection task?