Caffe: Allow images of different sizes as inputs for dense feature extraction

Created on 12 Dec 2014 · 1 comment · Source: BVLC/caffe

I noticed that in the Net Surgery for a Fully-Convolutional Model example, imagenet/imagenet_full_conv.prototxt specifies the input image size with input_dim: 451. As far as I can tell, this means every input image is resized to [451, 451, 3] before dense feature extraction. Is there a way to process input images of any size without hard-coding input_dim in the prototxt file?

question

Most helpful comment

Since #594 nets can reshape to accept different input sizes. #1313 will make this automatic for the DATA layer. One can call net.blobs['data'].reshape(...) in Python to change the input size too.

Please ask usage questions on the caffe-users mailing list. Thanks!
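Below is a minimal sketch of what that reshape-based workflow can look like in pycaffe. The prototxt/caffemodel names, the input image path, and the preprocessing settings are placeholders (not taken from the thread), and the exact API details may differ across Caffe versions.

```python
# Sketch: run a fully-convolutional net on an image of arbitrary size
# by reshaping the input blob to match the image before the forward pass.
import caffe

# Placeholder model files in the spirit of the Net Surgery example.
net = caffe.Net('imagenet_full_conv.prototxt',
                'imagenet_full_conv.caffemodel',
                caffe.TEST)

# Load an image of arbitrary size (H x W x 3, RGB, values in [0, 1]).
im = caffe.io.load_image('input.jpg')
h, w = im.shape[:2]

# Resize the input blob to this image and propagate the new shape
# through the rest of the net.
net.blobs['data'].reshape(1, 3, h, w)
net.reshape()

# Standard ImageNet-style preprocessing: channels first, [0, 255] range,
# RGB -> BGR. Mean subtraction is omitted here for brevity.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))
net.blobs['data'].data[...] = transformer.preprocess('data', im)

# The spatial size of the dense output map now depends on the input size.
out = net.forward()
print(out['prob'].shape)
```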


