Models: Change IoU threshold for mAP calculation when using the Object Detection API for evaluation.

Created on 28 Nov 2018 · 6 comments · Source: tensorflow/models

System information

  • Have I written custom code:
    No
  • OS Platform and Distribution:
    Ubuntu 16.04
  • TensorFlow installed from:
    pip install tensorflow-cpu==1.5.1
  • TensorFlow version:
    1.5.1
  • Bazel version:
    N/A
  • CUDA/cuDNN version:
    9.0
  • GPU model and memory:
    N/A
  • Exact command to reproduce:
python object_detection/eval.py \
    --logtostderr \
    --pipeline_config_path=object_detection/training/faster_rcnn_inception_v2_pets.config \
    --checkpoint_dir=object_detection/training/ \
    --eval_dir=object_detection/training/

Describe the problem

After scouring the documentation, GitHub, and Stack Overflow, I cannot find clear instructions on how to change the IoU threshold for evaluation. It is set at 0.5 by default. How would one go about changing it?

I noticed utils/object_detection_evaluation.py is what controls the metrics calculation. Do I change all the IoU values in all the metrics there? Also, when running eval.py, are all of those metrics used to calculate mAP by default?

I am not explicitly defining any metric in the config file.

Edit:
I changed the threshold from 0.5 to different values like 0.9 and 0.6 in utils/object_detection_evaluation.py. I also tried setting the metric to PASCAL VOC in my config file using metrics_set: "pascal_voc_detection_metrics". Neither change shows any difference in the results in TensorBoard.

My eval_config is below:

```
eval_config: {
  num_examples: 800
  max_evals: 1
  metrics_set: "pascal_voc_detection_metrics"
}
```

All 6 comments

Thank you for your post. We noticed you have not filled out the following fields in the issue template. Could you update them if they are relevant in your case, or leave them as N/A? Thanks.

  • What is the top-level directory of the model you are using
  • Have I written custom code
  • OS Platform and Distribution
  • TensorFlow installed from
  • TensorFlow version
  • Bazel version
  • CUDA/cuDNN version
  • GPU model and memory
  • Exact command to reproduce

I have formatted my post with the template provided, but I am still facing the same problem as described. I hope someone has a solution :(

Did anyone solve this?

You can specify your own evaluator, with whatever IoU threshold you want, in eval.py. For example, add

```
evaluator1 = object_detection_evaluation.PascalDetectionEvaluator(
    categories=categories, matching_iou_threshold=0.3)
```

and then pass

```
evaluator_list=[evaluator1]
```

to your evaluator.evaluate() call.
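A fuller sketch of how this could be wired up in eval.py. This is an illustration under assumptions: the evaluator_list keyword and the exact evaluator.evaluate() signature are taken from versions of object_detection/evaluator.py from around this release, and create_input_dict_fn, model_fn, categories, and FLAGS stand in for objects eval.py already builds; verify all of these against your checkout.

```python
from object_detection import evaluator
from object_detection.utils import object_detection_evaluation

# Build a PASCAL evaluator with a custom matching IoU threshold.
# `categories` is the label-map category list eval.py already creates.
evaluator1 = object_detection_evaluation.PascalDetectionEvaluator(
    categories=categories, matching_iou_threshold=0.3)

# Passing evaluator_list explicitly bypasses the default evaluators
# built from metrics_set in eval_config. The keyword is assumed to
# exist on evaluator.evaluate(); check your version of evaluator.py.
metrics = evaluator.evaluate(
    create_input_dict_fn,
    model_fn,
    eval_config,
    categories,
    FLAGS.checkpoint_dir,
    FLAGS.eval_dir,
    evaluator_list=[evaluator1])
```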

Have you tried changing 'iou_threshold' in the config file?

```
second_stage_post_processing {
  batch_non_max_suppression {
    score_threshold: 0.0
    iou_threshold: 0.6
    max_detections_per_class: 100
    max_total_detections: 300
  }
}
```

Hi @TitusTom @haichaoyu, maybe you can try this: add a matching_iou_threshold field in eval.proto, regenerate the pb2.py file with protoc, then add a branch for your change in eval_util.py:

```python
for eval_metric_fn_key in eval_metric_fn_keys:
  if eval_metric_fn_key in ('coco_detection_metrics', 'coco_mask_metrics'):
    evaluator_options[eval_metric_fn_key] = {
        'include_metrics_per_category': (
            eval_config.include_metrics_per_category)
    }
  elif eval_metric_fn_key == 'precision_at_recall_detection_metrics':
    evaluator_options[eval_metric_fn_key] = {
        'recall_lower_bound': (eval_config.recall_lower_bound),
        'recall_upper_bound': (eval_config.recall_upper_bound)
    }
  elif eval_metric_fn_key == 'pascal_voc_detection_metrics':
    # New branch: forward the proto field added above to the evaluator.
    evaluator_options[eval_metric_fn_key] = {
        'matching_iou_threshold': (eval_config.matching_iou_threshold)
    }
```
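With the new field in protos/eval.proto and the pb2 file regenerated, the threshold becomes settable from the pipeline config. A sketch of the resulting eval_config, assuming the field was named matching_iou_threshold as above (the field and the 0.75 value are this workaround's additions, not part of the stock proto):

```
eval_config: {
  num_examples: 800
  max_evals: 1
  metrics_set: "pascal_voc_detection_metrics"
  # Custom field added by the eval.proto change described above.
  matching_iou_threshold: 0.75
}
```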