Hi,
I looked carefully at the list of available layers and I think the following is missing. Imagine you have two columns in a deep network and you want to sum them element-wise to form a new layer. How can you do that? As far as I understand, the closest thing currently available would be to concatenate these layers and then use the concatenation as input to an inner product layer, which could reduce the dimensionality. However, this introduces an additional multiplication by a matrix, which I don't want: instead of getting v_1 + v_2, I only know how to get (v_1 * W_1) + (v_2 * W_2). Is it actually possible to get just v_1 + v_2 in some other way?
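For concreteness, a minimal sketch of that workaround in prototxt; the blob names v1/v2 and num_output: 256 are placeholders, assuming each column produces a 256-dimensional vector:
layer {
  name: "concat"
  type: "Concat"
  bottom: "v1"
  bottom: "v2"
  top: "v1_v2_concat"
  concat_param { axis: 1 }
}
layer {
  name: "reduce"
  type: "InnerProduct"
  bottom: "v1_v2_concat"
  top: "v1_v2_reduced"
  # this learned weight matrix is exactly the extra multiplication described above
  inner_product_param { num_output: 256 }
}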
You can use EltwiseLayer to do that.
Can you give me an example of how to use it? The layer catalogue only mentions the existence of such a layer but gives no clue about its usage.
You can find usage information in the comments in https://github.com/BVLC/caffe/blob/master/src/caffe/proto/caffe.proto
Usage:
layer {
  name: "eltwise-sum"
  type: "Eltwise"
  bottom: "v1"
  bottom: "v2"
  top: "v1_v2_sum"
  eltwise_param { operation: SUM }
}
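Note that all bottom blobs must have identical shapes, and besides SUM the operation field also accepts PROD and MAX; see EltwiseParameter in caffe.proto for the full definition.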
Such usage questions (and installation questions) should be asked on caffe-users, not in GitHub issues.
Thanks, that's what I was looking for. I asked this question at caffe-users, but since nobody answered I thought it wasn't in Caffe yet.
Hi ronghanghu, where are the weights trained in Eltwise layers? Are they merged into the filters of the hidden layers?
Since Caffe now supports 3D convolution, does it work for 3D element-wise sums as well?
Is there a means for weighted element-wise addition?
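If I read EltwiseParameter in caffe.proto correctly, the SUM operation also accepts per-bottom coeff values, so a weighted sum should look roughly like this (the 0.7/0.3 weights are just example values):
layer {
  name: "eltwise-wsum"
  type: "Eltwise"
  bottom: "v1"
  bottom: "v2"
  top: "v1_v2_wsum"
  eltwise_param {
    operation: SUM
    # one coeff per bottom blob; coeff is only valid with SUM
    coeff: 0.7
    coeff: 0.3
  }
}
This computes 0.7 * v1 + 0.3 * v2 element-wise.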