Caffe: Allow images of different sizes as inputs for dense feature extraction

Created on 12 Dec 2014 · 1 Comment · Source: BVLC/caffe

I noticed that in the Net Surgery for a Fully-Convolutional Model example, imagenet/imagenet_full_conv.prototxt fixes the input size with input_dim: 451. This resizes every input image to [451 451 3] before the dense feature extraction. Is there a way to process input images of any size without hard-coding input_dim in the prototxt file?

question

Most helpful comment

Since #594 nets can reshape to accept different input sizes. #1313 will make this automatic for the DATA layer. One can call net.blobs['data'].reshape(...) in Python to change the input size too.

Please ask usage questions on the caffe-users mailing list. Thanks!
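To illustrate the reshape-based workflow the comment describes, here is a minimal sketch of the shape handling involved. The image dimensions and blob name 'data' are assumptions for illustration; caffe.io.load_image returns an H x W x C float array, while Caffe blobs are N x C x H x W, so the image must be transposed before being copied in.

```python
import numpy as np

# Stand-in for an arbitrary-size RGB image as caffe.io.load_image would
# return it: H x W x C, float in [0, 1]. 500 x 375 is a hypothetical size.
im = np.random.rand(500, 375, 3).astype(np.float32)

# Caffe blobs are N x C x H x W: move channels first and add a batch axis.
blob = im.transpose(2, 0, 1)[np.newaxis, ...]
print(blob.shape)  # (1, 3, 500, 375)

# With pycaffe, the input blob can then be reshaped to fit this image
# before the forward pass (sketch, assuming the input blob is named 'data'
# in the prototxt):
#   net.blobs['data'].reshape(*blob.shape)
#   net.blobs['data'].data[...] = blob
#   out = net.forward()
```

Because the net reshapes to whatever size `blob` carries, no fixed input_dim is needed at deploy time.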


