Hello,
I am running a modified CNN-based Faster R-CNN model with ResNet-101.
The problem is that the console output is flooded with thousands of lines like these:
I1230 13:51:30.970799 10176 solver.cpp:244] Train net output #24664: res5c = 0
I1230 13:51:30.970805 10176 solver.cpp:244] Train net output #24665: res5c = 0
I1230 13:51:30.970810 10176 solver.cpp:244] Train net output #24666: res5c = 0
I1230 13:51:30.970816 10176 solver.cpp:244] Train net output #24667: res5c = 0
I1230 13:51:30.970821 10176 solver.cpp:244] Train net output #24668: res5c = 0
I1230 13:51:30.970827 10176 solver.cpp:244] Train net output #24669: res5c = 0
I1230 13:51:30.970832 10176 solver.cpp:244] Train net output #24670: res5c = 0
I1230 13:51:30.970839 10176 solver.cpp:244] Train net output #24671: res5c = 0
I1230 13:51:30.970844 10176 solver.cpp:244] Train net output #24672: res5c = 0
This problem was reported in #2889, but my case is not the one described there: all of the 'layer' definitions are consistent.
It seems to be a conflict between network definitions from different Caffe versions.
However, I have no idea how to debug this issue.
I have uploaded the train_agnostic.prototxt for review.
train_agnostic.txt
Thank you in advance.
Chuck
Thanks for including the prototxt. The blob "res5c" is not consumed by any other layer (an in-place operation is performed on it by "res5c_relu", but nothing is done with it afterwards). Caffe treats such top-level blobs ("leaves" of the network) as net outputs and prints every element of them during training. This makes sense for loss and accuracy layers, whose values you do want printed.
I'm guessing this is a bug in your prototxt, unless you have a reason for res5c to be dangling. If you still have issues, please email the caffe-users list.
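For reference, this is roughly what the dangling pattern looks like (a sketch using the standard ResNet-101 prototxt layer names; your definitions may differ slightly):

layer {
  name: "res5c"
  type: "Eltwise"
  bottom: "res5b"
  bottom: "res5c_branch2c"
  top: "res5c"
}
layer {
  name: "res5c_relu"
  type: "ReLU"
  bottom: "res5c"
  top: "res5c"   # in-place ReLU: reuses the same blob
}
# No later layer lists bottom: "res5c", so Caffe treats it as a net
# output and logs every element of the blob at each display iteration.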
Perhaps Caffe layers or networks should have a maximum-element-printouts parameter; if it were exceeded, Caffe could print a warning or throw an error.
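Something along these lines in the solver prototxt (purely hypothetical; no such field exists in Caffe's SolverParameter, this is only a sketch of the idea):

# HYPOTHETICAL sketch of the proposed setting; Caffe has no such field.
net: "train_agnostic.prototxt"
display: 20
max_net_output_printouts: 100   # warn (or error) once this many "Train net output" lines are logged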
Thank you so much! The issue is solved by your suggestion: I just deleted all the layers after res4_22, which are not used by the later Faster R-CNN code. @williford