Models: Dimension mismatch when converting to TFLite

Created on 24 Aug 2018 · 6 comments · Source: tensorflow/models

System information

  • What is the top-level directory of the model you are using: object detection
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): 16.04
  • TensorFlow installed from (source or binary): Source
  • TensorFlow version (use command below): 1.10
  • Bazel version (if compiling from source): 0.16
  • CUDA/cuDNN version: 9.0
  • GPU model and memory: M60
  • Exact command to reproduce: N/A

I was trying to convert MobileNet SSD v1 to a TFLite graph, but I am getting a dimension mismatch error. This is the config file I am using:

model {
  ssd {
    num_classes: 90
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.95
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.3333
      }
    }
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
    box_predictor {
      convolutional_box_predictor {
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.8
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        conv_hyperparams {
          activation: RELU_6,
          regularizer {
            l2_regularizer {
              weight: 0.00004
            }
          }
          initializer {
            truncated_normal_initializer {
              stddev: 0.03
              mean: 0.0
            }
          }
          batch_norm {
            train: true,
            scale: true,
            center: true,
            decay: 0.9997,
            epsilon: 0.001,
          }
        }
      }
    }
    feature_extractor {
      type: 'ssd_mobilenet_v1'
      min_depth: 16
      depth_multiplier: 1.0
      conv_hyperparams {
        activation: RELU_6,
        regularizer {
          l2_regularizer {
            weight: 0.00004
          }
        }
        initializer {
          truncated_normal_initializer {
            stddev: 0.03
            mean: 0.0
          }
        }
        batch_norm {
          train: true,
          scale: true,
          center: true,
          decay: 0.9997,
          epsilon: 0.001,
        }
      }
    }
    loss {
      classification_loss {
        weighted_sigmoid {
        }
      }
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      hard_example_miner {
        num_hard_examples: 3000
        iou_threshold: 0.99
        loss_type: CLASSIFICATION
        max_negatives_per_positive: 3
        min_negatives_per_image: 0
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    normalize_loss_by_num_matches: true
    post_processing {
      batch_non_max_suppression {
        score_threshold: 1e-8
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 100
      }
      score_converter: SIGMOID
    }
  }
}

train_config: {
  batch_size: 24
  optimizer {
    rms_prop_optimizer: {
      learning_rate: {
        exponential_decay_learning_rate {
          initial_learning_rate: 0.004
          decay_steps: 800720
          decay_factor: 0.95
        }
      }
      momentum_optimizer_value: 0.9
      decay: 0.9
      epsilon: 1.0
    }
  }
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  from_detection_checkpoint: true
  # Note: The below line limits the training process to 200K steps, which we
  # empirically found to be sufficient enough to train the pets dataset. This
  # effectively bypasses the learning rate schedule (the learning rate will
  # never decay). Remove the below line to train indefinitely.
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  data_augmentation_options {
    ssd_random_crop {
    }
  }
}

train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_train.record-?????-of-00100"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
}

eval_config: {
  num_examples: 8000
  # Note: The below line limits the evaluation process to 10 evaluations.
  # Remove the below line to evaluate indefinitely.
  max_evals: 10
}

eval_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/mscoco_val.record-?????-of-00010"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/mscoco_label_map.pbtxt"
  shuffle: false
  num_readers: 1
}

I trained the model with this config file and froze the graph per the instructions in the repo. Now, when I try to convert the frozen graph to TFLite with this command:

bazel run -c opt tensorflow/contrib/lite/toco:toco -- \
  --input_format=TENSORFLOW_GRAPHDEF \
  --input_file=frozen_inference_graph.pb \
  --output_format=TFLITE \
  --output_file=cocodetect.tflite \
  --inference_type=QUANTIZED_UINT8 \
  --inference_input_type=QUANTIZED_UINT8 \
  --input_arrays=image_tensor \
  --output_arrays='detection_boxes','detection_scores','detection_classes','num_detections'  \
  --input_shapes=1,300,300,3 \
  --mean_values=128 \
  --std_values=128 \
  --change_concat_input_ranges=false \
  --allow_custom_ops 

It throws this error:

tensorflow/contrib/lite/toco/graph_transformations/resolve_constant_slice.cc:59] Check failed: dim_size >= 1 (0 vs. 1)

Most helpful comment

I ran into the same issue before; I solved it by using export_tflite_ssd_graph instead of export_inference_graph.

python object_detection/export_tflite_ssd_graph.py --pipeline_config_path=$CONFIG_FILE --trained_checkpoint_prefix=$CHECKPOINT_PATH --output_directory=$OUTPUT_DIR --add_postprocessing_op=true

All 6 comments

I met the same issue when converting faster_rcnn_resnet101_coco_11_06_2017:

tensorflow/contrib/lite/toco/graph_transformations/resolve_constant_slice.cc:59] Check failed: dim_size >= 1 (0 vs. 1)

I ran into the same issue before; I solved it by using export_tflite_ssd_graph instead of export_inference_graph.

python object_detection/export_tflite_ssd_graph.py --pipeline_config_path=$CONFIG_FILE --trained_checkpoint_prefix=$CHECKPOINT_PATH --output_directory=$OUTPUT_DIR --add_postprocessing_op=true
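For context, export_tflite_ssd_graph.py writes a tflite_graph.pb whose input is the fixed-shape normalized_input_image_tensor and whose outputs come from the custom TFLite_Detection_PostProcess op, so the follow-up TOCO call targets those names rather than image_tensor and the detection_* tensors. A sketch of that second step, along the lines of the repo's running-on-mobile guide ($OUTPUT_DIR as in the export command above; older checkouts use the tensorflow/contrib/lite/toco:toco target instead):

bazel run -c opt tensorflow/lite/toco:toco -- \
  --input_file=$OUTPUT_DIR/tflite_graph.pb \
  --output_file=$OUTPUT_DIR/detect.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --inference_type=FLOAT \
  --allow_custom_ops

A QUANTIZED_UINT8 conversion like the one attempted above would additionally need --mean_values=128 --std_values=128 and, in practice, a pipeline config trained with quantization enabled, which the config at the top of this issue is not.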

Closing as this is resolved

I am trying to convert a .pb to .tflite using the ssd mobilenet v1 pets config.
I created the .pb file with this command:
sudo python export_inference_graph.py --input_type image_tensor --pipeline_config_path /home/ubuntu/training/data/ssd_mobilenet_v1_pets.config --trained_checkpoint_prefix /home/ubuntu/training/data/model.ckpt-78386 --output_directory /home/ubuntu/training/ --input_shape 1,300,300,3
and then summarized the frozen_inference_graph.pb.

It returns this output:
Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[1,300,300,3])
No variables spotted.
Found 4 possible outputs: (name=detection_boxes, op=Identity) (name=detection_scores, op=Identity) (name=detection_classes, op=Identity) (name=num_detections, op=Identity)
Found 6130346 (6.13M) const parameters, 0 (0) variable parameters, and 187 control_edges
Op types used: 1781 Const, 255 GatherV2, 230 Identity, 224 Reshape, 207 Minimum, 164 Maximum, 116 Slice, 113 Cast, 103 Mul, 99 Sub, 95 ConcatV2, 88 Greater, 82 Where, 82 Split, 72 Add, 63 Pack, 63 StridedSlice, 50 Shape, 50 Unpack, 45 ExpandDims, 43 Squeeze, 41 ZerosLike, 41 NonMaxSuppressionV3, 39 Fill, 37 Tile, 35 Relu6, 35 FusedBatchNorm, 34 Conv2D, 33 RealDiv, 21 Switch, 16 Range, 13 DepthwiseConv2dNative, 12 BiasAdd, 6 Merge, 6 Sqrt, 3 Assert, 3 Equal, 3 Transpose, 2 Exp, 1 All, 1 TopKV2, 1 Size, 1 Sigmoid, 1 Placeholder
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/jayasri/Downloads/ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18/original/frozen_inference_graph.pb --show_flops --input_layer=image_tensor --input_layer_type=uint8 --input_layer_shape=1,300,300,3 --output_layer=detection_boxes,detection_scores,detection_classes,num_detections

Then I tried to convert to TFLite using bazel-bin/tensorflow/lite/toco/toco,

but I got this error:
tensorflow/lite/toco/import_tensorflow.cc:1336] Converting unsupported operation: Size
2019-05-29 15:46:02.589145: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 2780 operators, 4997 arrays (0 quantized)
2019-05-29 15:46:02.794908: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] After Removing unused ops pass 1: 2741 operators, 4915 arrays (0 quantized)
2019-05-29 15:46:03.070306: I tensorflow/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 2741 operators, 4915 arrays (0 quantized)
2019-05-29 15:46:03.162066: F tensorflow/lite/toco/graph_transformations/resolve_constant_slice.cc:59] Check failed: dim_size >= 1 (0 vs. 1)

I still can't find a solution.

Has anybody been able to resolve this one?

I ran into the same issue before; I solved it by using export_tflite_ssd_graph instead of export_inference_graph.

python object_detection/export_tflite_ssd_graph.py --pipeline_config_path=$CONFIG_FILE --trained_checkpoint_prefix=$CHECKPOINT_PATH --output_directory=$OUTPUT_DIR --add_postprocessing_op=true
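Concretely, for the pets checkpoint in the comment above, that export would look something like this (the tflite output directory is just an example name), after which TOCO is pointed at the resulting tflite_graph.pb instead of frozen_inference_graph.pb, as sketched earlier in the thread:

python object_detection/export_tflite_ssd_graph.py \
  --pipeline_config_path=/home/ubuntu/training/data/ssd_mobilenet_v1_pets.config \
  --trained_checkpoint_prefix=/home/ubuntu/training/data/model.ckpt-78386 \
  --output_directory=/home/ubuntu/training/tflite \
  --add_postprocessing_op=true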

Did someone successfully get a model with 4 outputs (detection_boxes, detection_classes, detection_scores, num_detections) with this code? Or am I the only one getting just two raw outputs (raw_outputs/box_encodings, raw_outputs/class_predictions), even when using add_postprocessing_op=true? I opened a new issue on this: https://github.com/tensorflow/tensorflow/issues/31015

Thanks for helping!
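One quick way to tell which graph actually got converted is to list the output tensors of the .tflite file: with the post-processing op included there should be four outputs (TFLite_Detection_PostProcess and its :1, :2, :3 siblings), while without it only the two raw heads appear. A minimal check, assuming the model was saved as detect.tflite and a TF version where the interpreter lives under tf.lite:

python -c "import tensorflow as tf; print([d['name'] for d in tf.lite.Interpreter(model_path='detect.tflite').get_output_details()])"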
