Models: AttributeError: 'SSDResNet50V1FpnKerasFeatureExtractor' object has no attribute 'restore_from_classification_checkpoint_fn'

Created on 13 Jul 2020 · 8 comments · Source: tensorflow/models

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • [x] I am using the latest TensorFlow Model Garden release and TensorFlow 2.
  • [x] I am reporting the issue to the correct repository. (Model Garden official or research directory)
  • [x] I checked to make sure that this issue has not been filed already.

1. The entire URL of the file you are using

https://github.com/tensorflow/models/tree/master/official/...

2. Describe the bug

I followed this official documentation:
https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#training-the-model

I am using training/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8 and have edited its config file as described in the docs.
When I run train.py I get the following error:
```
I0713 11:22:44.929779 139813510887232 sync_replicas_optimizer.py:187] SyncReplicasV2: replicas_to_aggregate=8; total_num_replicas=1
Traceback (most recent call last):
File "train.py", line 186, in
tf.app.run()
File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(*args, **kwargs)
File "train.py", line 182, in main
graph_hook_fn=graph_rewriter_fn)
File "/home/yaser.sakkaf/Object_Detection/TensorFlow/models/research/object_detection/legacy/trainer.py", line 392, in train
train_config.load_all_detection_checkpoint_vars))
File "/home/yaser.sakkaf/Object_Detection/TensorFlow/models/research/object_detection/meta_architectures/ssd_meta_arch.py", line 1277, in restore_map
return self._feature_extractor.restore_from_classification_checkpoint_fn(
AttributeError: 'SSDResNet50V1FpnKerasFeatureExtractor' object has no attribute 'restore_from_classification_checkpoint_fn'
ERROR:tensorflow:==================================
Object was never used (type ):

If you want to mark it as used call its "mark_used()" method.
It was originally created here:
File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(args, *kwargs) File "train.py", line 182, in main
graph_hook_fn=graph_rewriter_fn) File "/home/yaser.sakkaf/Object_Detection/TensorFlow/models/research/object_detection/legacy/trainer.py", line 415, in train
saver=saver) File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/training/sync_replicas_optimizer.py", line 358, in apply_gradients
return train_op File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 237, in wrapped

error_in_function=error_in_function)

E0713 11:22:55.322450 139813510887232 tf_should_use.py:92] ==================================
Object was never used (type ):

If you want to mark it as used call its "mark_used()" method.
It was originally created here:
File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/util/deprecation.py", line 324, in new_func
return func(args, *kwargs) File "train.py", line 182, in main
graph_hook_fn=graph_rewriter_fn) File "/home/yaser.sakkaf/Object_Detection/TensorFlow/models/research/object_detection/legacy/trainer.py", line 415, in train
saver=saver) File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/training/sync_replicas_optimizer.py", line 358, in apply_gradients
return train_op File "/opt/anaconda3/envs/cdsl/lib/python3.6/site-packages/tensorflow/python/util/tf_should_use.py", line 237, in wrapped

error_in_function=error_in_function)


3. Steps to reproduce

python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config

4. Expected behavior

The training should start.

5. Additional context

Have a look at my config file: **ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config**

```
# SSD with Resnet 50 v1 FPN feature extractor, shared box predictor and focal
# loss (a.k.a Retinanet).
# See Lin et al, https://arxiv.org/abs/1708.02002
# Trained on COCO, initialized from Imagenet classification checkpoint
# Train on TPU-8
#
# Achieves 34.3 mAP on COCO17 Val

model {
ssd {
inplace_batchnorm_update: true
freeze_batchnorm: false
num_classes: 8
box_coder {
faster_rcnn_box_coder {
y_scale: 10.0
x_scale: 10.0
height_scale: 5.0
width_scale: 5.0
}
}
matcher {
argmax_matcher {
matched_threshold: 0.5
unmatched_threshold: 0.5
ignore_thresholds: false
negatives_lower_than_unmatched: true
force_match_for_each_row: true
use_matmul_gather: true
}
}
similarity_calculator {
iou_similarity {
}
}
encode_background_as_zeros: true
anchor_generator {
multiscale_anchor_generator {
min_level: 3
max_level: 7
anchor_scale: 4.0
aspect_ratios: [1.0, 2.0, 0.5]
scales_per_octave: 2
}
}
image_resizer {
fixed_shape_resizer {
height: 640
width: 640
}
}
box_predictor {
weight_shared_convolutional_box_predictor {
depth: 256
class_prediction_bias_init: -4.6
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.0004
}
}
initializer {
random_normal_initializer {
stddev: 0.01
mean: 0.0
}
}
batch_norm {
scale: true,
decay: 0.997,
epsilon: 0.001,
}
}
num_layers_before_predictor: 4
kernel_size: 3
}
}
feature_extractor {
type: 'ssd_resnet50_v1_fpn_keras'
fpn {
min_level: 3
max_level: 7
}
min_depth: 16
depth_multiplier: 1.0
conv_hyperparams {
activation: RELU_6,
regularizer {
l2_regularizer {
weight: 0.0004
}
}
initializer {
truncated_normal_initializer {
stddev: 0.03
mean: 0.0
}
}
batch_norm {
scale: true,
decay: 0.997,
epsilon: 0.001,
}
}
override_base_feature_extractor_hyperparams: true
}
loss {
classification_loss {
weighted_sigmoid_focal {
alpha: 0.25
gamma: 2.0
}
}
localization_loss {
weighted_smooth_l1 {
}
}
classification_weight: 1.0
localization_weight: 1.0
}
normalize_loss_by_num_matches: true
normalize_loc_loss_by_codesize: true
post_processing {
batch_non_max_suppression {
score_threshold: 1e-8
iou_threshold: 0.6
max_detections_per_class: 100
max_total_detections: 100
}
score_converter: SIGMOID
}
}
}

train_config: {
fine_tune_checkpoint_version: V2
fine_tune_checkpoint: "/home/yaser.sakkaf/Object_Detection/TensorFlow/workspace/training_demo/pre-trained-model/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/resnet50.ckpt-1"
fine_tune_checkpoint_type: "classification"
batch_size: 64
sync_replicas: true
startup_delay_steps: 0
replicas_to_aggregate: 8
use_bfloat16: true
num_steps: 25000
data_augmentation_options {
random_horizontal_flip {
}
}
data_augmentation_options {
random_crop_image {
min_object_covered: 0.0
min_aspect_ratio: 0.75
max_aspect_ratio: 3.0
min_area: 0.75
max_area: 1.0
overlap_thresh: 0.0
}
}
optimizer {
momentum_optimizer: {
learning_rate: {
cosine_decay_learning_rate {
learning_rate_base: .04
total_steps: 25000
warmup_learning_rate: .013333
warmup_steps: 2000
}
}
momentum_optimizer_value: 0.9
}
use_moving_average: false
}
max_number_of_boxes: 100
unpad_groundtruth_tensors: false
}

train_input_reader: {
label_map_path: "/home/yaser.sakkaf/Object_Detection/TensorFlow/workspace/training_demo/training/kyc_label_map.pbtxt"
tf_record_input_reader {
input_path: "/home/yaser.sakkaf/Object_Detection/TensorFlow/workspace/training_demo/annotations/train.tfrecord-00000-of-00001"
}
}

eval_config: {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
}

eval_input_reader: {
label_map_path: "/home/yaser.sakkaf/Object_Detection/TensorFlow/workspace/training_demo/training/kyc_label_map.pbtxt"
shuffle: false
num_epochs: 1
tf_record_input_reader {
input_path: "/home/yaser.sakkaf/Object_Detection/TensorFlow/workspace/training_demo/annotations/test.tfrecord-00000-of-00001"
}
}
```

6. System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
    NAME="Red Hat Enterprise Linux"
    VERSION="8.2 (Ootpa)"

  • Mobile device name if the issue happens on a mobile device:

  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): '2.2.0'
  • Python version: Python 3.6.10
  • Bazel version (if compiling from source):
  • GCC/Compiler version (if compiling from source):
  • CUDA/cuDNN version: CUDA 10.1.243 / cuDNN 7
  • GPU model and memory: Tesla P100 16 GB
Labels: research, bug

Most helpful comment

I solved it just now: change fine_tune_checkpoint_type in the config from classification to detection. It works for me.

All 8 comments

Hi @yasersakkaf, I have come across your issue by chance. Being the author of the tutorial you have pointed out (i.e. the TensorFlow Object Detection API tutorial), I am glad to see that people are coming across and using my tutorial, but please note that it is NOT official documentation and, as pointed out at the start of the tutorial, it is NOT intended for TensorFlow 2. Since TensorFlow announced 3 days ago that the Object Detection API is now compatible with TensorFlow 2, I have started working on a TensorFlow 2 version of the tutorial, which hopefully should not take long to publish.

Now, having said that, I will offer my 2 cents on the issue I can spot:

  • The model you are trying to train is only compatible with TensorFlow 2; however, the training script you are using (i.e. train.py) is a legacy script for TensorFlow 1. Even for TF1-compatible models, TensorFlow themselves suggest that you use the model_main.py script instead. From what I can see, the TensorFlow team has now provided a model_main_tf2.py script, which should be used for training TensorFlow 2 compatible models.

Based on the above, I would suggest that you try using model_main_tf2.py. Hopefully that solves your problem.
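For reference, a minimal sketch of what such a TF2 training run might look like, assuming the paths used in this issue (model_lib_v2.train_loop is what model_main_tf2.py ends up calling, though the exact keyword arguments can differ between Model Garden versions):

```python
# Minimal sketch, not a verified recipe: drive the TF2 training loop directly.
# Paths and step count are placeholders taken from this issue.
import tensorflow as tf
from object_detection import model_lib_v2

strategy = tf.distribute.MirroredStrategy()   # model_main_tf2.py wraps training in a strategy scope
with strategy.scope():
    model_lib_v2.train_loop(
        pipeline_config_path="training/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config",
        model_dir="training/",   # checkpoints and train summaries are written here
        train_steps=25000,       # matches num_steps in the pipeline config
        use_tpu=False)           # single-GPU training in this setup
```

The equivalent command-line call should be python model_main_tf2.py --pipeline_config_path=training/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config --model_dir=training/ --alsologtostderr; note that the TF2 script expects --model_dir rather than train.py's --train_dir.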

Hi @sglvladi, I tried using the model_main_tf2.py file, but it runs into an assertion error.

AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program: [MirroredVariable:{
0: array([0., 0., 0., ..., 0., 0., 0.], dtype=float32)>
}, SyncOnReadVariable:{
0: array([1., 1., 1., ..., 1., 1., 1.], dtype=float32)>
}, MirroredVariable:{
0: array([[[[-0.01707526, 0.00958763, -0.00717184, ..., -0.01004329, 0.01805542, 0.01051651]]]], dtype=float32)>
}, SyncOnReadVariable:{
0: array([1., 1., 1., ..., 1., 1., 1.], dtype=float32)>
}, MirroredVariable:{
0: kernel:0' shape=(3, 3, 256, 256) dtype=float32, numpy=
array([[[[ 3.27228568e-02, -4.64998297e-02, 2.26492342e-02, ..., -1.47941932e-02, 2.41242279e-03, 9.44997184e-03]]]], dtype=float32)>
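One way to sanity-check what the fine-tune checkpoint actually contains (a minimal sketch, assuming the checkpoint path from the pipeline config above) is to list its variables and compare them against the objects named in the error:

```python
# Minimal sketch: list every variable stored in the fine-tune checkpoint.
# The prefix below is the fine_tune_checkpoint value from the pipeline config above.
import tensorflow as tf

ckpt_prefix = ("/home/yaser.sakkaf/Object_Detection/TensorFlow/workspace/training_demo/"
               "pre-trained-model/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8/resnet50.ckpt-1")

# tf.train.list_variables yields (name, shape) pairs for each checkpointed variable.
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)
```

If the names do not line up with the detection model's objects, that usually points at a mismatch between fine_tune_checkpoint_type and the kind of checkpoint being restored.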

@Hs1000 I am not affiliated with the TensorFlow team, but I would advise that you open a new issue with details of your setup. That seems sensible since it is not clear how your comment is related to this issue.

What @sglvladi mentioned in this comment is correct. train.py is a legacy script; one should use model_main.py for TF1 training and model_main_tf2.py for TF2 training.

@Hs1000 Please open a separate issue with more details. The large array dump does not provide any useful information.

@Hs1000 I encountered the same error. If you want to fine-tune a model, you should change fine_tune_checkpoint_type in the config from classification to fine_tune; that solves it.

Hi @veegalinova, I had the same error with faster_rcnn: AssertionError: Some Python objects were not bound to checkpointed values, likely due to changes in the Python program.

faster_rcnn didn't support fine_tune_checkpoint_type: fine_tune.

I solved it just now: change fine_tune_checkpoint_type in the config from classification to detection. It works for me.
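If editing the .config file by hand feels error-prone, here is a minimal sketch of making the same change programmatically with the Object Detection API's config utilities (the pipeline file path is the one used earlier in this issue):

```python
# Minimal sketch: flip fine_tune_checkpoint_type to "detection" via config_util.
from object_detection.utils import config_util

pipeline_file = "training/ssd_resnet50_v1_fpn_640x640_coco17_tpu-8.config"
configs = config_util.get_configs_from_pipeline_file(pipeline_file)

# configs["train_config"] is the parsed TrainConfig proto from the pipeline file.
configs["train_config"].fine_tune_checkpoint_type = "detection"

pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "training/")  # writes training/pipeline.config
```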

@Hs1000 I encountered the same error. If you want to fine-tune a model, you should change fine_tune_checkpoint_type in the config from classification to fine_tune; that solves it.

python model_main_tf2.py --logtostderr --train_dir=training/ --pipeline_config_path=training/ssd_mobilenet_v2_fpnlite_320x320_coco17_tpu-8.config

I got this error:

Traceback (most recent call last):
File "model_main_tf2.py", line 106, in
tf.compat.v1.app.run()
File "C:\Users\Mandar\AppData\Roaming\Python\Python37\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\Mandar\anaconda3\envs\gputest\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\Mandar\anaconda3\envs\gputest\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "model_main_tf2.py", line 103, in main
use_tpu=FLAGS.use_tpu)
File "C:\Users\Mandar\anaconda3\envs\gputest\lib\site-packages\object_detection\model_lib_v2.py", line 532, in train_loop
os.path.join(model_dir, 'train'))
File "C:\Users\Mandar\anaconda3\envs\gputest\lib\ntpath.py", line 76, in join
path = os.fspath(path)
TypeError: expected str, bytes or os.PathLike object, not NoneType
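For what it's worth, the TypeError above comes from os.path.join receiving None for the output directory: model_main_tf2.py reads that directory from a --model_dir flag rather than --train_dir, and since the app runner tolerates undefined flags, the --train_dir argument is silently ignored and model_dir stays None. A minimal sketch of that failure mode (the flag behaviour is an assumption inferred from the traceback):

```python
# Minimal sketch reproducing the TypeError above. Assumption: FLAGS.model_dir
# stayed None because --train_dir was passed instead of --model_dir.
import os

model_dir = None                  # value seen by train_loop when --model_dir is never supplied
os.path.join(model_dir, "train")  # TypeError: expected str, bytes or os.PathLike object, not NoneType
```

Passing --model_dir=training/ instead of --train_dir=training/ should get past this particular error.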
