Caffe: depthwise convolution

Created on 26 May 2017 · 16 comments · Source: BVLC/caffe

Training depthwise convolutions in Caffe is very slow. Are there any plans to reimplement depthwise convolution?

Most helpful comment

You may be interested in this https://github.com/BVLC/caffe/pull/5665

All 16 comments

Do you mean the `group` parameter of the convolution layer?


Yes, depthwise convolution is from the paper "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" (https://arxiv.org/abs/1704.04861).
Caffe can train such a net by setting the group number equal to the input channel number, but training is very slow because Caffe uses a `for` loop to run im2col + sgemm once per group. TF has a new implementation of depthwise convolution.
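
For reference, a layer set up this way looks roughly like the following in prototxt (a minimal sketch assuming a 32-channel input; the layer and blob names are made up):

```
layer {
  name: "conv_dw"            # hypothetical name
  type: "Convolution"
  bottom: "data"
  top: "conv_dw"
  convolution_param {
    num_output: 32           # one filter per input channel
    group: 32                # group == input channels -> depthwise
    kernel_size: 3
    pad: 1
    stride: 1
  }
}
```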

I also tried it several weeks ago. You are right: low speed and high memory consumption.

I ran into this problem too. I looked at the TF kernel called "DepthwiseConv2DKernel" and didn't find any difference except that TF uses Eigen. Did you solve this problem?

You may be interested in this https://github.com/BVLC/caffe/pull/5665

@zjchuyp

> Yes, depthwise convolution is from the paper "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications" (https://arxiv.org/abs/1704.04861).
> Caffe can train such a net by setting the group number equal to the input channel number, but training is very slow because Caffe uses a `for` loop to run im2col + sgemm once per group. TF has a new implementation of depthwise convolution.

I think Caffe doesn't perform im2col #group times:

```cpp
template <typename Dtype>
void BaseConvolutionLayer<Dtype>::forward_cpu_gemm(const Dtype* input,
    const Dtype* weights, Dtype* output, bool skip_im2col) {
  const Dtype* col_buff = input;
  if (!is_1x1_) {
    if (!skip_im2col) {
      // im2col runs once over the whole input, not once per group.
      conv_im2col_cpu(input, col_buffer_.mutable_cpu_data());
    }
    col_buff = col_buffer_.cpu_data();
  }
  // Only the gemm is repeated, once per group, on per-group offsets.
  for (int g = 0; g < group_; ++g) {
    caffe_cpu_gemm<Dtype>(CblasNoTrans, CblasNoTrans, conv_out_channels_ /
        group_, conv_out_spatial_dim_, kernel_dim_,
        (Dtype)1., weights + weight_offset_ * g, col_buff + col_offset_ * g,
        (Dtype)0., output + output_offset_ * g);
  }
}
```
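
For a depthwise layer, group_ equals the number of channels, so conv_out_channels_ / group_ == 1 and each gemm in that loop degenerates to a tiny vector-matrix product. A quick look at the shapes involved, using hypothetical MobileNet-like sizes (not figures from this thread):

```cpp
#include <cstdio>

int main() {
  // Hypothetical depthwise 3x3 layer: 32 channels, 112x112 output.
  const int channels = 32, kernel_dim = 3 * 3;
  const int out_spatial = 112 * 112;
  const int group = channels;      // depthwise: group == channels
  const int m = channels / group;  // rows per gemm: 1
  // 32 calls of a (1 x 9) * (9 x 12544) product per image: the overhead
  // of launching many tiny gemms dominates, not im2col itself.
  std::printf("%d gemm calls of shape (%d x %d) * (%d x %d)\n",
              group, m, kernel_dim, kernel_dim, out_spatial);
  return 0;
}
```

So the slowdown comes from doing group_ small gemms per image rather than from repeating im2col.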

@lolongcovas
You are right, thanks!

@willyd
Thanks a lot, I'll try it.

@lolongcovas, @willyd
Can you please share your commit/code for this, if you have it? Thanks.

@zjchuyp
Hi, TF also uses a conversion step to gather contiguous memory for gemm. Because of its data layout (traversed along the channel dimension), it gets longer contiguous runs of memory, which SIMD can exploit for high speed. It also has a step that gathers data much like im2col in Caffe. So if it works this way too, why is it several times faster than Caffe?

Is it still slow using the cuDNN implementation? According to the code, the cuDNN convolution calls for all groups are asynchronous on different CUDA streams and are synchronized at the end of forward/backward, so the GPU should be utilized as much as possible.

@gzygzy9211
I had to turn cuDNN off, or it crashes (Check failed: status == CUDNN_STATUS_SUCCESS).

@willyd
Thanks a lot.

@birdwcp I think you should dig into it to find the reason.

Hi all, I am looking for a convolution layer without im2col; I want it to take its input directly from an im2col output.

To get faster depthwise convolutions, a dedicated implementation is needed instead of a separate gemm call per group. As far as I know, no one has submitted a PR against this version of Caffe to do so.
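
For anyone curious what such a dedicated implementation computes, here is a naive CPU sketch of a depthwise forward pass (an illustration only, not the code from the PR linked above; it assumes a single image, stride 1, and "same" padding):

```cpp
#include <vector>

// Naive depthwise convolution: each input channel is convolved with its
// own k x k filter; there is no im2col buffer and no per-group gemm.
void depthwise_forward(const std::vector<float>& input,   // C*H*W
                       const std::vector<float>& weights, // C*k*k
                       std::vector<float>& output,        // C*H*W
                       int C, int H, int W, int k, int pad) {
  for (int c = 0; c < C; ++c) {
    for (int y = 0; y < H; ++y) {
      for (int x = 0; x < W; ++x) {
        float acc = 0.f;
        for (int ky = 0; ky < k; ++ky) {
          for (int kx = 0; kx < k; ++kx) {
            const int iy = y + ky - pad;
            const int ix = x + kx - pad;
            if (iy >= 0 && iy < H && ix >= 0 && ix < W) {
              acc += input[(c * H + iy) * W + ix] *
                     weights[(c * k + ky) * k + kx];
            }
          }
        }
        output[(c * H + y) * W + x] = acc;
      }
    }
  }
}
```

Fusing everything into one pass like this keeps each channel's reads contiguous and avoids launching group_ separate gemm calls, which is where dedicated implementations get their speedup.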
