I used this work (just a mock of coco.py) and fed my own medical image data into the model.
The training and validation losses look reasonable,
but the test results are terrible: the predicted class is always 1, with high confidence.
Does anyone have ideas on how to debug this? I don't know how to see results during training, since it only reports the losses.
Thanks for your reply.
Start by verifying that your Dataset class is generating the data correctly. Use the inspect_data notebook to visualize the images and verify that the code is generating the correct training targets.
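For example, something along these lines (a rough sketch, assuming the matterport `mrcnn` package layout and that `dataset` is your prepared Dataset instance; the inspect_data notebook does this more thoroughly):

```python
from mrcnn import utils, visualize  # assumes the matterport mrcnn package layout

# Spot-check a few training samples. `dataset` is assumed to be your
# Dataset instance after dataset.prepare() has been called.
for image_id in dataset.image_ids[:5]:
    image = dataset.load_image(image_id)
    mask, class_ids = dataset.load_mask(image_id)
    bbox = utils.extract_bboxes(mask)
    # If the class names printed here are wrong, the bug is in the
    # Dataset class, not in the model.
    print(image_id, [dataset.class_names[i] for i in class_ids])
    visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
```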
If that's all okay, then use the inspect_model notebook to visualize the prediction process step by step and that should help you locate where the error is happening.
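You can also just run inference on a validation image and look at the detections directly (a sketch assuming an `InferenceConfig`, `MODEL_DIR`, and `dataset_val` defined as in the repo's notebooks; the weights path is a placeholder):

```python
import mrcnn.model as modellib
from mrcnn import visualize

# Build an inference-mode model and load your trained weights.
model = modellib.MaskRCNN(mode="inference", config=InferenceConfig(),
                          model_dir=MODEL_DIR)
model.load_weights("/path/to/your/mask_rcnn_weights.h5", by_name=True)  # placeholder path

# Detect on one validation image and draw the predictions.
image = dataset_val.load_image(dataset_val.image_ids[0])
results = model.detect([image], verbose=0)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
                            dataset_val.class_names, r['scores'])
```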
@wadmes This problem also occurs for me. I found that the model computes confidence as P(class|object), which doesn't take objectness into account. I revised part of model.py (https://github.com/keineahnung2345/Mask_RCNN/commit/dfdfc7888ff7b74efd624082363a8eafff6ae043) so that it computes confidence as P(class|object) * P(object).
After this modification, I get a lower false positive rate.
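If it helps, here is a minimal NumPy sketch of that idea (a conceptual illustration, not the exact code in the linked commit): multiply the classification head's softmax output P(class|object) by the RPN's objectness score P(object) for each ROI.

```python
import numpy as np

def combined_confidence(class_probs, objectness):
    """Combine P(class | object) with P(object) per ROI.

    class_probs: (num_rois, num_classes) softmax output of the classifier head
    objectness:  (num_rois,) foreground probability from the RPN
    """
    return class_probs * objectness[:, np.newaxis]

# Two ROIs: both are confidently classified, but the RPN thinks
# the second one probably contains no object at all.
class_probs = np.array([[0.05, 0.95],
                        [0.10, 0.90]])
objectness = np.array([0.9, 0.2])
print(combined_confidence(class_probs, objectness))
# [[0.045 0.855]
#  [0.02  0.18 ]]  -> the second ROI's final confidence drops sharply
```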