The output is like this:
2019-04-27 20:59:25,194 - INFO - Epoch [3][50/739] lr: 0.00500, eta: 1:38:23, time: 0.804, data_time: 0.006, memory: 4766, loss_rpn_cls: 0.2377, loss_rpn_reg: 0.0266, loss_cls: 0.2489, acc: 95.7383, loss_reg: 0.0717, loss: 0.5849
2019-04-27 21:00:06,344 - INFO - Epoch [3][100/739] lr: 0.00500, eta: 1:38:51, time: 0.823, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.1876, loss_rpn_reg: 0.0253, loss_cls: 0.2461, acc: 95.7832, loss_reg: 0.0774, loss: 0.5364
2019-04-27 21:00:44,879 - INFO - Epoch [3][150/739] lr: 0.00500, eta: 1:36:27, time: 0.771, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2370, loss_rpn_reg: 0.0300, loss_cls: 0.2372, acc: 96.1895, loss_reg: 0.0669, loss: 0.5711
2019-04-27 21:01:24,965 - INFO - Epoch [3][200/739] lr: 0.00500, eta: 1:35:51, time: 0.802, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2715, loss_rpn_reg: 0.0331, loss_cls: 0.2465, acc: 95.7871, loss_reg: 0.0692, loss: 0.6204
2019-04-27 21:02:05,276 - INFO - Epoch [3][250/739] lr: 0.00500, eta: 1:35:20, time: 0.806, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2405, loss_rpn_reg: 0.0307, loss_cls: 0.2050, acc: 96.9707, loss_reg: 0.0429, loss: 0.5191
2019-04-27 21:02:45,734 - INFO - Epoch [3][300/739] lr: 0.00500, eta: 1:34:49, time: 0.809, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2276, loss_rpn_reg: 0.0259, loss_cls: 0.1788, acc: 97.0605, loss_reg: 0.0427, loss: 0.4750
2019-04-27 21:03:27,550 - INFO - Epoch [3][350/739] lr: 0.00500, eta: 1:34:43, time: 0.836, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2213, loss_rpn_reg: 0.0276, loss_cls: 0.1854, acc: 96.9688, loss_reg: 0.0463, loss: 0.4806
2019-04-27 21:04:09,369 - INFO - Epoch [3][400/739] lr: 0.00500, eta: 1:34:28, time: 0.836, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2653, loss_rpn_reg: 0.0358, loss_cls: 0.2265, acc: 96.2910, loss_reg: 0.0537, loss: 0.5812
2019-04-27 21:04:51,090 - INFO - Epoch [3][450/739] lr: 0.00500, eta: 1:34:06, time: 0.834, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2150, loss_rpn_reg: 0.0291, loss_cls: 0.2113, acc: 96.4609, loss_reg: 0.0610, loss: 0.5164
2019-04-27 21:05:31,534 - INFO - Epoch [3][500/739] lr: 0.00500, eta: 1:33:22, time: 0.809, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2696, loss_rpn_reg: 0.0381, loss_cls: 0.2426, acc: 96.0098, loss_reg: 0.0611, loss: 0.6115
2019-04-27 21:06:12,496 - INFO - Epoch [3][550/739] lr: 0.00500, eta: 1:32:45, time: 0.819, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2356, loss_rpn_reg: 0.0326, loss_cls: 0.2189, acc: 96.4922, loss_reg: 0.0539, loss: 0.5409
2019-04-27 21:06:53,434 - INFO - Epoch [3][600/739] lr: 0.00500, eta: 1:32:07, time: 0.819, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2374, loss_rpn_reg: 0.0272, loss_cls: 0.2326, acc: 96.0957, loss_reg: 0.0632, loss: 0.5604
2019-04-27 21:07:34,947 - INFO - Epoch [3][650/739] lr: 0.00500, eta: 1:31:35, time: 0.830, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2571, loss_rpn_reg: 0.0349, loss_cls: 0.2513, acc: 95.9023, loss_reg: 0.0632, loss: 0.6065
2019-04-27 21:08:17,239 - INFO - Epoch [3][700/739] lr: 0.00500, eta: 1:31:09, time: 0.846, data_time: 0.004, memory: 4766, loss_rpn_cls: 0.2537, loss_rpn_reg: 0.0303, loss_cls: 0.2292, acc: 96.3457, loss_reg: 0.0563, loss: 0.5695
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 158/158, 6.5 task/s, elapsed: 24s, ETA: 0s
+--------------------+-----+------+--------+-----------+-------+
| class | gts | dets | recall | precision | ap |
+--------------------+-----+------+--------+-----------+-------+
| aeroplane | 136 | 0 | 0.000 | 0.000 | 0.000 |
| ship | 43 | 0 | 0.000 | 0.000 | 0.000 |
| storage_tank | 75 | 0 | 0.000 | 0.000 | 0.000 |
| baseball_diamond | 125 | 0 | 0.000 | 0.000 | 0.000 |
| tennis_court | 86 | 0 | 0.000 | 0.000 | 0.000 |
| basketball_court | 39 | 0 | 0.000 | 0.000 | 0.000 |
| ground_track_field | 33 | 0 | 0.000 | 0.000 | 0.000 |
| harbor | 53 | 0 | 0.000 | 0.000 | 0.000 |
| bridge | 26 | 0 | 0.000 | 0.000 | 0.000 |
| vehicle | 43 | 0 | 0.000 | 0.000 | 0.000 |
+--------------------+-----+------+--------+-----------+-------+
| mAP | | | | | 0.000 |
+--------------------+-----+------+--------+-----------+-------+
I've modified the classes in voc.py, and the classes shown above are my classes. Can you tell me how to fix this problem? I'm a newcomer, so please forgive me if this is a stupid question. Thank you!
The loss does not seem to decrease. You may first check the dataset.
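One common dataset problem is a mismatch between the class names in the annotation XML files and the CLASSES tuple configured in voc.py; every box whose name is not in the list gets silently dropped, which yields exactly this all-zero table. A minimal sketch of such a check (the CLASSES tuple below mirrors the classes in the table above and is only illustrative):

```python
import xml.etree.ElementTree as ET

# Classes as configured in mmdet/datasets/voc.py (illustrative custom list).
CLASSES = ('aeroplane', 'ship', 'storage_tank', 'baseball_diamond',
           'tennis_court', 'basketball_court', 'ground_track_field',
           'harbor', 'bridge', 'vehicle')

def unknown_names(xml_string, classes=CLASSES):
    """Return object names in a VOC-style XML that are NOT in `classes`."""
    root = ET.fromstring(xml_string)
    names = [obj.findtext('name') for obj in root.iter('object')]
    return [n for n in names if n not in classes]

# Tiny inline annotation standing in for a real Annotations/*.xml file.
sample = """<annotation>
  <object><name>aeroplane</name></object>
  <object><name>plane</name></object>
</annotation>"""

print(unknown_names(sample))  # ['plane'] -> a name mismatch to fix
```

In practice you would loop over all files in Annotations/ and flag any non-empty result.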
May I ask which files I should modify besides voc.py and the config files?
I found that the loss did decrease, but it plateaued at around 0.5.
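To see whether the total loss is really plateauing, one option is to parse the loss fields out of the training log lines shown above and plot them. A minimal sketch, assuming the log format in this thread:

```python
import re

def parse_losses(log_line):
    """Collect all 'loss*: value' fields from one mmdetection log line."""
    return {k: float(v)
            for k, v in re.findall(r'(loss\w*): ([\d.]+)', log_line)}

# One line copied from the training log above.
line = ("2019-04-27 21:08:17,239 - INFO - Epoch [3][700/739] lr: 0.00500, "
        "eta: 1:31:09, time: 0.846, data_time: 0.004, memory: 4766, "
        "loss_rpn_cls: 0.2537, loss_rpn_reg: 0.0303, loss_cls: 0.2292, "
        "acc: 96.3457, loss_reg: 0.0563, loss: 0.5695")

print(parse_losses(line))
# {'loss_rpn_cls': 0.2537, 'loss_rpn_reg': 0.0303, 'loss_cls': 0.2292,
#  'loss_reg': 0.0563, 'loss': 0.5695}
```

Running this over every line of the log file gives a per-iteration curve for each loss term, which makes a plateau (or a single stuck component such as loss_rpn_cls) easy to spot.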
Hello, I train on the VOC dataset too. Before training, I modified voc.py, and training went well. However, when I tested on the VOC dataset, an error occurred. The official tutorial says:

> To perform evaluation after testing, add --eval <EVAL_TYPES>. Supported types are: [proposal_fast, proposal, bbox, segm, keypoints]. proposal_fast denotes evaluating proposal recalls with our own implementation; the others denote evaluating the corresponding metric with the official COCO API.

Thus, I think --eval bbox should be used with the COCO dataset. Now I want to test on the VOC dataset; which files do I need to modify? I have read voc_eval.py, but how do I use this file? Thank you very much!
Thank you! I have solved my problem.
How did you solve your problem? Can you tell me? Thanks!
We cannot use python tools/test.py to evaluate a VOC dataset, because that script only supports coco_eval.
We should use python tools/voc_eval.py results.pkl instead.
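For reference, the full workflow would look roughly like this. The config and checkpoint paths are placeholders, and the exact arguments of voc_eval.py may differ between mmdetection versions:

```shell
# 1. Run inference and dump raw detections to a pickle file (no --eval flag).
python tools/test.py configs/faster_rcnn_r50_fpn_1x_voc.py \
    work_dirs/faster_rcnn_r50_fpn_1x_voc/latest.pth \
    --out results.pkl

# 2. Evaluate the dumped results with the VOC metric.
python tools/voc_eval.py results.pkl configs/faster_rcnn_r50_fpn_1x_voc.py
```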
It is solved, but when I trained on my own dataset, the voc_eval result was like this. I want to know why:
+-----------+------+------+--------+-----------+-------+
| class     | gts  | dets | recall | precision | ap    |
+-----------+------+------+--------+-----------+-------+
| aeroplane | 1305 | 2150 | 0.988 | 0.600 | 0.900 |
| bicycle | 738 | 1302 | 0.992 | 0.562 | 0.871 |
| bird | 1253 | 1996 | 0.977 | 0.613 | 0.894 |
| boat | 751 | 1337 | 0.955 | 0.536 | 0.878 |
| bottle | 588 | 926 | 0.986 | 0.626 | 0.903 |
| bus | 597 | 1071 | 0.975 | 0.543 | 0.883 |
| car | 611 | 942 | 0.993 | 0.644 | 0.900 |
+-----------+------+------+--------+-----------+-------+
| mAP | | | | | 0.890 |
+-----------+------+------+--------+-----------+-------+
What do you mean, why? Your result looks good.
What I mean is that the dataset is my own, for beverage recognition. Why do the test results still show classes like aeroplane and bicycle? I have modified voc.py and replaced the dataset under data/, so this does not make sense.
Oh, so that's what you mean. This is a problem from how mmdetection was installed. Presumably you installed it with python setup.py install or pip install .; in that case, after you modify voc.py directly, you need to rebuild, i.e. run python setup.py install again.
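One way to confirm whether Python is importing your edited source tree or a stale installed copy is to print the file path of the module actually being loaded. This sketch uses a stdlib package as a stand-in; in practice you would pass "mmdet" and check whether the path points at your repository or at site-packages:

```python
import importlib
import os

def installed_location(module_name):
    """Return the file Python actually loads for `module_name`."""
    mod = importlib.import_module(module_name)
    return os.path.abspath(mod.__file__)

# Stand-in demo with a stdlib package; replace "json" with "mmdet" to see
# whether the import resolves to your source tree or to site-packages.
print(installed_location("json"))
```

If the printed path is under site-packages rather than your checkout, edits to voc.py will have no effect until you reinstall.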
After I ran python setup.py install again, the trained model still detects classes like aeroplane. Is there any other method?
Wow, thanks! I'll try it first. It's more comfortable to communicate in Chinese; my English is terrible.
Maybe the last build was not deleted cleanly; you may need to reinstall mmdetection. Note, from INSTALL.md:
You can run python(3) setup.py develop or pip install -e . to install mmdetection if you want to make modifications to it frequently.
So do I need to delete everything from the previous mmdetection installation?
Yes, that is indeed one solution: rebuild the whole mmdetection.
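A clean rebuild, assuming mmdetection was previously installed with python setup.py install or pip install ., might look like this (run from the repository root; the installed package name may differ by version):

```shell
# Remove the stale installed copy and leftover build artifacts.
pip uninstall -y mmdet        # package name may be 'mmdet' or 'mmdetection'
rm -rf build/ mmdet.egg-info/

# Reinstall in editable/develop mode so later edits to voc.py take effect
# immediately, without another rebuild.
pip install -e .
```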
Hello, may I ask about --eval
I ran into this problem too. Did you solve it?
I have the same issue: the classes in the mAP table are correct, but the numbers are still all 0. Any other thoughts on how to fix this?
> I found that the loss did decrease. But it stopped at around 0.5.

Did you ever solve your issue?
@flyfj I have met the same issue; AP and mAP are all 0. Did you fix this?
Most helpful comment
We cannot use python tools/test.py to evaluate a VOC dataset, because that script only supports coco_eval. Of course, we first run python tools/test.py with --out results.pkl and without the --eval argument to obtain results.pkl.
Then we should use python tools/voc_eval.py results.pkl.