Models: struct2depth Convolution not supported for input with rank

Created on 3 Mar 2019  ·  14 comments  ·  Source: tensorflow/models

System information

  • What is the top-level directory of the model you are using: models/research/struct2depth/
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow installed from (source or binary): pip install tensorflow-gpu
  • TensorFlow version (use command below): 1.5
  • Bazel version (if compiling from source): -
  • CUDA/cuDNN version: CUDA 9 cuDNN 7
  • GPU model and memory: GTX 1050 2GB
  • Exact command to reproduce:
    python train.py \
      --logtostderr \
      --checkpoint_dir D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\train_checkpoint \
      --data_dir D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\KITTI_procesed \
      --architecture resnet \
      --imagenet_ckpt resnet_pretrained/model.ckpt \
      --imagenet_norm true

Describe the problem

  1. Hello, I tried to run train.py but hit the error "Convolution not supported for input with rank". Is there anything wrong with my training procedure? I generated train.txt manually with the format (processed_image_file_path) (processed_image_file_name). Is there anything wrong with this?

  2. Also, for the data_dir path, which folder should I use: the KITTI_procesed folder produced by gen_data_kitti.py, or kitti-raw-uncompressed?

  3. Last, where can I find the model.ckpt file for the ResNet model? I only found the pre-trained models at https://sites.google.com/view/struct2depth

Thank you very much.

Error:
I0304 12:27:19.917803 14332 reader.py:291] data_dir: D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\KITTI_procesed
I0304 12:27:20.101785 14332 reader.py:158] image_stack: Tensor("data_loading/batching/shuffle_batch:0", shape=(4, 128, 416, 9), dtype=float32)
Traceback (most recent call last):
File "train.py", line 259, in <module>
app.run(main)
File "F:\Anaconda3\envsdepth\lib\site-packages\absl\app.py", line 300, in run
_run_main(main, args)
File "F:\Anaconda3\envsdepth\lib\site-packages\absl\app.py", line 251, in _run_main
sys.exit(main(argv))
File "train.py", line 184, in main
size_constraint_weight=FLAGS.size_constraint_weight)
File "D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\model.py", line 158, in __init__
self.build_train_graph()
File "D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\model.py", line 169, in build_train_graph
self.build_inference_for_training()
File "D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\model.py", line 393, in build_inference_for_training
weight_reg=self.weight_reg)
File "D:\Kuliah\S2\Depth_Estimation\models-master\research\struct2depth\nets.py", line 129, in objectmotion_net
cnv1 = slim.conv2d(image_stack, 16, [7, 7], stride=2, scope='cnv1')
File "F:\Anaconda3\envsdepth\lib\site-packages\tensorflow\contrib\framework\python\ops\arg_scope.py", line 182, in func_with_args
return func(*args, **current_args)
File "F:\Anaconda3\envsdepth\lib\site-packages\tensorflow\contrib\layers\python\layers\layers.py", line 1035, in convolution
input_rank)
ValueError: ('Convolution not supported for input with rank', None)

research support

All 14 comments

  1. Hello, I wonder why you persist in generating train.txt; it seems that train.txt is not mentioned in train.py or the other code.
  2. Since _struct2depth_ is based on _vid2depth_ and _Zhou's SfMLearner_: I haven't tried vid2depth, but when I ran Zhou's _prepare_train_data.py_ in /SfMLearner/data, a train.txt was generated with a format like "2011_09_26_drive_0014_sync_03 0000000214" (folder_name picture_number). I hope it can help you.
  3. I followed your advice in https://github.com/tensorflow/models/issues/6297, and some errors arose.
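For reference, a "folder_name picture_number" listing like the one described above can be produced by a short script. This is only a sketch under the assumption that each drive folder contains numbered .png frames and that segmentation masks end in -fseg.png; the function name and layout are illustrative, not part of struct2depth.

```python
import os

def write_train_txt(data_dir, out_path):
    """Write one 'folder_name frame_id' line per image frame,
    in the SfMLearner-style format mentioned above (sketch)."""
    lines = []
    for folder in sorted(os.listdir(data_dir)):
        folder_path = os.path.join(data_dir, folder)
        if not os.path.isdir(folder_path):
            continue
        for fname in sorted(os.listdir(folder_path)):
            # Skip segmentation masks; keep only the RGB frames.
            if fname.endswith('.png') and not fname.endswith('-fseg.png'):
                frame_id = os.path.splitext(fname)[0]
                lines.append('%s %s' % (folder, frame_id))
    with open(out_path, 'w') as f:
        f.write('\n'.join(lines) + '\n')
    return lines
```

Whether train.py actually reads such a file depends on the data loader version, so treat this only as a way to reproduce the format quoted above.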

Well, I will try it again.
I will tell you if I successfully train this model.
Thank you for your response.

Hello, could you tell me what you put in the folder resnet_pretrained? I downloaded a model from the official website: https://sites.google.com/view/struct2depth#h.p_agwk4jrb2a_M (under the heading Models) and put it in resnet_pretrained, but now I feel confused... (if you could speak Chinese, I would like to consult you in another way.. ::>_<:: .)


I downloaded the pre-trained model from that site too; point the flag at it, for example: --imagenet_ckpt D:\Kuliah\S2\Depth_Estimation\struct2depth\checkpoint\model-199160

That is, replace model.ckpt with model-199160.

By the way, have you solved this error:
ValueError('Using a joint encoder is currently not supported when '
ValueError: Using a joint encoder is currently not supported when modeling object motion.

@godblezzme29 Try passing --joint_encoder=false. It is not necessary to use this. Our final models don't use a joint encoder.

@MJ0623 We started our training by initializing our ResNet architecture with ImageNet weights. To obtain the weights, you can convert a pre-trained ResNet18 model from torch into a TensorFlow checkpoint. If you don't want to train yourself, you can use the models we posted on our project website, which are fully trained; follow the examples in the README to load them.

(if you could speak chinese, I would like to consult you in another way.. ::>_<:: .)

Sorry, I'm learning Chinese at the moment, but my Chinese is still not very good :(


@VincentCa Ummmm, I got it: if we want to train ourselves, we pass imagenet_ckpt="resnet_pretrained/model.ckpt" to train.py, where model.ckpt is a pre-trained ResNet18 model converted from torch into a TensorFlow checkpoint, right? And we can't load imagenet_ckpt = model-199160?

Yes, that’s right.


Oohh okay, maybe these are my last questions:

  1. To train from scratch and reproduce your results, do we need to set joint_encoder to false and handle_motion to true, or anything else? And may I know how you generate the -fseg.png files? I have searched for how to generate -fseg files but cannot find any references.
  2. Is it possible to fine-tune your model by freezing the last few layers, or to retrain the model starting from your pre-trained weights?

I met "Convolution not supported for input with rank" too.
It was solved by using a newer TF:

pip uninstall tensorflow
pip uninstall tensorflow-gpu
pip install tensorflow==1.10.0
pip install tensorflow-gpu==1.10.0
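Since the fix above depends on the installed TensorFlow version, a quick guard at the start of a script can fail fast on older installs. This is only a sketch: the 1.10.0 minimum comes from the comment above, and the helper names are illustrative.

```python
def version_tuple(version):
    """Turn a version string like '1.10.0' into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split('.')[:3])

def check_min_tf_version(installed, minimum='1.10.0'):
    """Raise if the installed TensorFlow is older than the minimum."""
    if version_tuple(installed) < version_tuple(minimum):
        raise RuntimeError(
            'TensorFlow %s is too old; >= %s is needed to avoid the '
            '"Convolution not supported for input with rank" error.'
            % (installed, minimum))
```

At runtime you would call it as `check_min_tf_version(tf.__version__)` after importing TensorFlow.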

@jtoy @bmabey @alextp @moonboots @ewilderj
Hello! I have met the same problem, but I am running "optimize.py" rather than the training code. My test data has just two images, 052.png and 053.png, and the content of my triplet_list_file.txt is as follows:
testdata/input 052
testdata/input 053

And the result I got is 'Convolution not supported for input with rank'. What should I do, and how can I solve this problem?

Maybe showing me the structure of your project would solve this; I suspect the PATH I wrote is wrong. Thanks very much!
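One way to rule out a wrong path is to check every entry of the triplet list against the files on disk before running optimize.py. This sketch assumes the "folder frame_id" format shown in the list file above and a .png extension; the helper name is illustrative, not part of struct2depth.

```python
import os

def missing_triplet_images(list_file, root='.'):
    """Return the image paths referenced in a 'folder frame_id' list
    file that do not exist on disk (sketch; .png extension assumed)."""
    missing = []
    with open(list_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            folder, frame_id = line.split()
            path = os.path.join(root, folder, frame_id + '.png')
            if not os.path.isfile(path):
                missing.append(path)
    return missing
```

An empty return value means every listed image was found, so a remaining rank error would point at something other than the paths.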

> 1. Hello, I wonder why you persist in generating train.txt; it seems that train.txt is not mentioned in train.py or the other code.
> 2. Since _struct2depth_ is based on _vid2depth_ and _Zhou's SfMLearner_: I haven't tried vid2depth, but when I ran Zhou's _prepare_train_data.py_ in /SfMLearner/data, a train.txt was generated with a format like "2011_09_26_drive_0014_sync_03 0000000214" (folder_name picture_number). I hope it can help you.
> 3. I followed your advice in https://github.com/tensorflow/models/issues/6297, and some errors arose.

Hello! I didn't find the train.txt file, but it seems everyone mentions it. What is it used for, and is it required to run train.py?
Thank you

> Hello, could you tell me what you put in the folder resnet_pretrained? I downloaded a model from the official website: https://sites.google.com/view/struct2depth#h.p_agwk4jrb2a_M (under the heading Models) and put it in resnet_pretrained, but now I feel confused...
>
> I downloaded the pre-trained model from that site too; point the flag at it, for example: --imagenet_ckpt D:\Kuliah\S2\Depth_Estimation\struct2depth\checkpoint\model-199160 (replace model.ckpt with model-199160).
>
> By the way, have you solved this error:
> ValueError: Using a joint encoder is currently not supported when modeling object motion.

Hello, I met the same problem when I used the model downloaded from https://sites.google.com/view/struct2depth#h.p_agwk4jrb2a_M. It shows the same error as yours.
Does this mean we must train from the pre-trained ResNet-18 rather than fine-tune the authors' model?

If someone is wondering where to get the pre-trained TF model for ResNet18, I found the following repository useful: https://github.com/dalgu90/resnet-18-tensorflow
