I noticed that in the Net Surgery for a Fully-Convolutional Model example, imagenet/imagenet_full_conv.prototxt specifies the input image size as input_dim: 451. This means every input image is resized to [451, 451, 3] before the dense feature extraction. I wonder if there is a way to process input images of any size without specifying input_dim in the prototxt file.
Since #594, nets can reshape to accept different input sizes. #1313 will make this automatic for the DATA layer. One can also call `net.blobs['data'].reshape(...)` in Python to change the input size.
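For example, a minimal pycaffe sketch of reshaping the input blob to fit an arbitrarily sized image (the .caffemodel file name here is assumed for illustration):

```python
import numpy as np
import caffe

# Load the fully-convolutional net (weights file name assumed for illustration).
net = caffe.Net('imagenet/imagenet_full_conv.prototxt',
                'imagenet/imagenet_full_conv.caffemodel',
                caffe.TEST)

# An input image of arbitrary size as an H x W x 3 float array.
im = np.random.rand(300, 500, 3).astype(np.float32)

# Resize the input blob to match this image, then propagate the new
# shapes through the rest of the net.
net.blobs['data'].reshape(1, 3, im.shape[0], im.shape[1])
net.reshape()

# Feed the image (transposed to C x H x W) and run dense inference.
net.blobs['data'].data[0] = im.transpose(2, 0, 1)
out = net.forward()
```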
Please ask usage questions on the caffe-users mailing list. Thanks!