@shelhamer @longjon From my naive point of view, it looks like the requirements for FCN-x from the model zoo have made it into master; is this true?
I'm eventually hoping to do some fine tuning off of that work, but in the process, I thought I'd try creating a reproducible example of one of the FCN-Xs working. Work started in a separate repo here: https://github.com/developmentseed/caffe-fcn/blob/master/src/fcn-fwd.ipynb -- before I take it further, just wanted to ask if you'd be interested in a PR adding something like this to the examples in this repo? (If so, I might as well set up a branch and work from there.)
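For reference, the notebook boils down to a standard pycaffe forward pass along the lines of the sketch below; the file names, blob names, and mean values here are placeholders rather than the exact ones from the notebook:

```python
import numpy as np
import caffe

# Rough sketch of FCN inference with pycaffe; the prototxt/caffemodel paths,
# the 'data'/'score' blob names, and the mean values are placeholders.
caffe.set_mode_cpu()
net = caffe.Net('fcn-8s-deploy.prototxt', 'fcn-8s.caffemodel', caffe.TEST)

im = caffe.io.load_image('input.jpg')            # H x W x 3, RGB, values in [0, 1]
im = im[:, :, ::-1] * 255.0                      # to BGR, 0-255 as Caffe expects
im -= np.array([104.0, 117.0, 123.0])            # subtract (assumed) per-channel BGR mean
im = im.transpose((2, 0, 1))                     # to C x H x W

net.blobs['data'].reshape(1, *im.shape)          # FCNs accept arbitrary input sizes
net.blobs['data'].data[...] = im
net.forward()
labels = net.blobs['score'].data[0].argmax(axis=0)  # per-pixel class predictions
```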
I'd like to hear more about this because I am thinking of using FCN-x in my work, and it would be nice to have this in master!
@anandthakker right, the requirements for FCNs (coordinate mapping + crop layer) were merged in #3613 and #3570 on 03/05. Note however that the old jonlong/caffe:future definitions are incompatible since the configuration of the crop layer changed; that said, the weights (the caffemodels) are still compatible, since the layers with weights are unchanged.
We're working on posting new reference models with net spec, weights, solver, and scripts soon along with a notebook example.
Great, thanks!
Thanks for the update and explanation, @shelhamer
While the reference models and example notebook are being hammered out, you can see the fcn.berkeleyvision.org repo for master-compatible FCNs, solver configurations, and scripts for learning, inference, and scoring. I'll follow up as more models are ported.
Happy brewing.
@shelhamer I have an FCN model originally created using jonlong/caffe:future. My FCN model works really well on a cardiac segmentation dataset using that branch. However, when I try to run the exact same model using the master branch with coordinate mapping and nd crop layer, the accuracy is considerably worse. In fact, I cannot reproduce the same level of accuracy using the master branch. What is the difference in the crop layer's implementation between caffe-future and the master branch? Has anyone else noticed this accuracy mismatch between the two branches?
@vuptran From my testing of the crop layer in longjon:future versus the master branch, longjon:future appears to center the crop (computing the offset from the DiagonalAffineMap), while the master branch by default places the crop at offset (0, 0). This could be a major cause of your accuracy loss, since fusing layers with different crop offsets means the features become misaligned. For now you can compute the offsets yourself and specify them explicitly, e.g. "crop_param { offset: 9 }", but this is more tedious to compute manually.
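For what it's worth, here is a minimal sketch of what setting the offset by hand looks like through the Python net spec; the layer names and sizes are made up, and the generated prototxt carries the same crop_param as the hand-written form quoted above:

```python
import caffe
from caffe import layers as L, params as P

# Toy sketch (made-up names and sizes) of setting the Crop offset by hand on
# the master branch, where the default offset is 0 rather than a centered crop.
n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 3, 500, 500]))
n.conv = L.Convolution(n.data, num_output=21, kernel_size=3, pad=100)
n.pool = L.Pooling(n.conv, pool=P.Pooling.MAX, kernel_size=2, stride=2)
n.upscore = L.Deconvolution(n.pool, convolution_param=dict(
    num_output=21, kernel_size=4, stride=2, bias_term=False))
# Manually align the upsampled score map with the input. The right offset
# depends on your net's padding and strides (here it works out to the 100-pixel
# pad above); getting it wrong silently misaligns the features.
n.score = L.Crop(n.upscore, n.data, axis=2, offset=100)
print(n.to_proto())
```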
@vuptran @jasonbunk the master branch is compatible with jonlong/caffe:future networks _as long as you configure the crop layer_. The master edition separates determining the crop coordinates (the coord_map of #3613 in the Python net spec) from actually doing the cropping (#3570). The parameters (the caffemodel) from the old branch are compatible with master, but the definition of the crop layer in the old-style architecture (the prototxt) is not.
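In other words, once the prototxt is regenerated with the new-style crop layer, the old caffemodel loads as usual. A rough sketch, with placeholder file names:

```python
import caffe

# Old jonlong/caffe:future weights load into a master-compatible definition,
# since only the parameter-free crop layer changed. File names are placeholders.
caffe.set_mode_cpu()
net = caffe.Net('fcn-master-deploy.prototxt',  # regenerated with the new Crop layer
                'fcn-future.caffemodel',       # weights trained on the old branch
                caffe.TEST)
```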
> but this is more tedious to compute manually
The point of coord_map is to compute this automatically from the net spec. See the coord_map import and crop definition in the VOC FCN-32s model at fcn.berkeleyvision.org for an example.
While you need to generate the proto in Python, you can then use the proto definition however you did before (whether Python or not).
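As a rough illustration of that workflow (toy layer sizes and made-up blob names; the real definitions live in the net.py files at fcn.berkeleyvision.org), the crop helper from coord_map fills in the offset that would otherwise be computed by hand:

```python
import caffe
from caffe import layers as L, params as P
from caffe.coord_map import crop  # the coordinate mapping merged in #3613

# Toy sketch of the net-spec workflow: build the net in Python, let crop()
# derive the offset from the layer geometry, then dump a prototxt you can use
# from any interface. Layer names and sizes here are made up.
n = caffe.NetSpec()
n.data = L.Input(shape=dict(dim=[1, 3, 500, 500]))
n.conv = L.Convolution(n.data, num_output=21, kernel_size=3, pad=100)
n.pool = L.Pooling(n.conv, pool=P.Pooling.MAX, kernel_size=2, stride=2)
n.upscore = L.Deconvolution(n.pool, convolution_param=dict(
    num_output=21, kernel_size=4, stride=2, bias_term=False))
n.score = crop(n.upscore, n.data)  # offset inferred, no hand calculation

with open('toy_fcn.prototxt', 'w') as f:
    f.write(str(n.to_proto()))
```

The generated prototxt then contains a Crop layer with the matching crop_param, so the rest of your pipeline (Python or not) is unchanged.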
@vuptran I hope that clears it up for you. Happy to hear you were able to make a cardiac segmentation FCN.
Dear @vuptran
I want to apply FCN to a cardiac MRI dataset and segment the right and left ventricles (I have prepared the VOIs as .png and the ground-truth masks as .png). Can you please tell me how to prepare them for training with FCN?
Thanks in advance.
Closing this as addressed by fcn.berkeleyvision.org