I'm seeing a lot of "warning: ignoring #pragma omp parallel" messages, like this:

Maybe there should be a -fopenmp?
How installed (conda, pip, source): source

Thanks for the report. We don't directly use omp parallel in nms_cpu, so this might come from somewhere else.
We will have a look.
@vfdev-5 Any news so far?
@xkszltl I'll try to take a look at this issue this week.
@vfdev-5 any update? :) I'm having the same problem
@klvnptr Sorry, it seems I missed this issue in my task list. Let me check this on Monday. Thanks.
thanks for getting back @vfdev-5
for me this solved the problem:
$ apt-get install libopenmpi-dev
$ CFLAGS="-fopenmp" python3 setup.py bdist_wheel
@vfdev-5
I don't think that's the right way.
Is the problem in setup.py, or is the issue still there when calling cmake directly?
When adding OpenMP, you should use find_package(OpenMP) and link to it.
Blindly adding -fopenmp may cause inconsistent results with PyTorch's OpenMP. There are many OpenMP implementations, not just the default one.
Brought up the issue upstream as well: https://github.com/pytorch/pytorch/issues/48033
@xkszltl the "warning: ignoring #pragma omp parallel" message is seen when we build torchvision with python setup.py build, more precisely when the C++ extensions are built:
https://github.com/pytorch/vision/blob/74de51d6d478e289135d9274e6af550a9bfba137/setup.py#L423
As far as I understand this process, there is no CMake associated with that.
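For reference, here is a minimal sketch of how such a C++ extension is typically declared in a setup.py using PyTorch's build helpers; the package name, source file, and the -fopenmp flag below are only illustrative, not the actual torchvision setup:

from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CppExtension

setup(
    name="example_ext",  # hypothetical package name
    ext_modules=[
        CppExtension(
            name="example_ext._C",            # hypothetical extension module
            sources=["csrc/example.cpp"],     # hypothetical source file
            extra_compile_args=["-fopenmp"],  # the flag under discussion; only a sketch
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)

With this kind of build, any compiler flag has to be passed through extra_compile_args (or an environment variable like CFLAGS), since there is no CMake step that could discover OpenMP on its own.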
Blindly adding -fopenmp may cause inconsistent results with PyTorch's OpenMP. There are many OpenMP implementations, not just the default one.

Could you please detail this point?
When adding openmp, you should use find_package(OpenMP) and link to it.
Maybe this has to be addressed separately for the C++ frontend... We have to investigate that.
@vfdev-5
You're right, it only shows up in pip build, not pure cmake build.
Didn't notice that before.
Could you please detail this point?

That could be -fopenmp-simd; it could be a choice between iomp and gomp; iomp could come from LLVM or from Intel; or you could even end up with a mix of these.
OpenMP can have many incompatible configurations.
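As a quick way to see which OpenMP build the installed PyTorch reports (a small sketch, not a complete diagnosis of runtime mixing):

import torch

# Prints ATen's parallelization settings, including the OpenMP version
# PyTorch was built with and the thread counts in use.
print(torch.__config__.parallel_info())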
@xkszltl do you think we should revert #3006 due to potential conflicts with the OpenMP used by PyTorch?
Once this is fixed in PyTorch, it should definitely be reverted.
Before that, it could either be kept as a short-term workaround, or removed so that at least it's possible to manually provide only the correct flag when there's a conflict.
I slightly lean toward the latter, but have no strong opinion.
@vfdev-5 I think having the warning about not using OpenMP (but still working fine in cases where there could otherwise be clashes) is better, so I would be OK with reverting the PR.
@fmassa sounds good!
Recently, in torchaudio we realized that compiling with GNU's -fopenmp flag causes a segmentation fault. We would love to learn the correct way to link against the version of OpenMP that PyTorch uses.
I don't know.
It's also easy to end up with an accidental mix of Intel iomp and gomp when MKL is there.
Technically, the CMake dependency handling should take care of that properly.
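One way to check for such a mix is to look at which OpenMP runtimes the installed torch libraries actually pull in. A rough sketch for a Linux wheel; the library name libtorch_cpu.so and the ldd call are assumptions about the wheel layout:

import os
import subprocess
import torch

# Locate the shared libraries shipped with the installed torch wheel.
lib_dir = os.path.join(os.path.dirname(torch.__file__), "lib")
candidate = os.path.join(lib_dir, "libtorch_cpu.so")  # assumed name on a Linux wheel

# List the OpenMP runtimes (libgomp, libiomp5, libomp, ...) it links against.
out = subprocess.run(["ldd", candidate], capture_output=True, text=True).stdout
for line in out.splitlines():
    if "omp" in line:
        print(line.strip())

Seeing both libgomp and libiomp5 in the output (or in an extension module built against torch) would be a hint of the kind of accidental mix described above.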