Suggested:
intel:
    version: ["17.0.0", "17.0.1", "17.0.2", "17.0.3", "17.0.4", "17.0.5", "17.0.6", "17.0.7", "17.0.8",
              "18.0.0", "18.0.1", "18.0.2", "18.0.3", "18.0.4",
              "19.0.0", "19.0.1", "19.0.2", "19.0.3"]
    libcxx: [libstdc++, libstdc++11]  # Linux only
    runtime: [MD, MT, MTd, MDd]  # Windows only
    cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
To investigate:
It seems the Intel C++ compiler is compatible with both GCC and Visual Studio, with some considerations:
GCC support is declared for "most versions" (https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-gcc-compatibility-and-interoperability) without any other major consideration apart from optimization flags:
C language object files created with the Intel® C++ Compiler are binary compatible with gcc* and the C/C++ language library. You can use the Intel® C++ Compiler or the gcc* compiler to pass object files to the linker.
Visual Studio support is declared to be compatible with VS 2013 and 2015 https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-microsoft-compatibility and probably 2017 too, as stated in the porting guide https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-overview-porting-from-the-microsoft-compiler-to-the-intel-c-compiler. However, many features like preprocessor directives or keywords are not supported, and this will reduce the compatibility of this compiler with libraries developed for VS.
As said here, the Intel compiler supports the C++11 standard at most: https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-conformance-to-the-c-c-standards. But this seems to be outdated information, as it really supports C++17 features (I haven't found anything about C++20): https://software.intel.com/en-us/articles/c17-features-supported-by-intel-c-compiler
Some of the supported standard features change across patch versions of the compiler (see for example the C++17 link above): template argument deduction for class templates was not supported in Intel 19.0.0, but it is in 19.0.1.
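That patch-level granularity is why the full three-component version matters. A minimal sketch of checking feature support against a version table (the table here is illustrative and contains only the single CTAD data point mentioned above; a real one would be filled from Intel's C++17 support matrix):

```python
# Illustrative only: the first Intel release supporting a given feature.
# The single entry below is the data point from the text (CTAD works in
# 19.0.1 but not 19.0.0); a real table would have many more rows.
FEATURE_SINCE = {
    "class-template-argument-deduction": (19, 0, 1),
}

def supports(feature, version):
    """Return True if a "major.minor.patch" version string has the feature."""
    return tuple(int(p) for p in version.split(".")) >= FEATURE_SINCE[feature]

print(supports("class-template-argument-deduction", "19.0.0"))  # False
print(supports("class-template-argument-deduction", "19.0.1"))  # True
```

Comparing versions as integer tuples avoids the usual string-comparison pitfall ("19.0.10" < "19.0.9" lexicographically).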
The Intel Fortran Compiler allows interoperability with C code (no libcxx) https://software.intel.com/en-us/fortran-compiler-developer-guide-and-reference-standard-fortran-and-c-interoperability and is compatible with the Intel C++ Compiler and with Visual Studio or GCC.
On the Linux side one thing to consider is the version of libstdc++ that the Intel compiler will use.
By default the Intel compiler uses the standard library that it finds from the local GCC install.
If the libstdc++ version isn't tracked then it will be hard to ensure compatibility between builds labeled with the same settings.
It is also worth noting that the Intel build will normally be suitable for linking together with libraries built with GCC, as long as the same standard library version is used.
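Since the Intel compiler picks up whichever GCC it finds on the system, the proposed gcc_version subsetting would need that value detected somehow. A minimal sketch, assuming nothing about Conan's API (parse_gcc_version and detect_gcc_version are hypothetical helpers), that normalizes the output of `gcc -dumpversion` to the major.minor form used in settings.yml:

```python
import subprocess

def parse_gcc_version(output):
    """Normalize `gcc -dumpversion` output ("8.2.0", "9", ...) to "major.minor".
    Note that GCC >= 7 prints only the major version by default."""
    parts = output.strip().split(".")
    return ".".join(parts[:2]) if len(parts) > 1 else parts[0]

def detect_gcc_version(gcc="gcc"):
    """Ask the gcc found in PATH for its version, i.e. the one icc would use."""
    result = subprocess.run([gcc, "-dumpversion"],
                            capture_output=True, text=True, check=True)
    return parse_gcc_version(result.stdout)
```

Recording this value in the settings would make two Intel builds distinguishable even when they linked against different libstdc++ versions.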
So thinking out loud a bit gives the two proposals below:
Add gcc_version:
intel:
    version: ["17.0.0", "17.0.1", "17.0.2", "17.0.3", "17.0.4", "17.0.5", "17.0.6", "17.0.7", "17.0.8",
              "18.0.0", "18.0.1", "18.0.2", "18.0.3", "18.0.4",
              "19.0.0", "19.0.1", "19.0.2", "19.0.3"]
    gcc_version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
                  "5", "5.1", "5.2", "5.3", "5.4", "5.5",
                  "6", "6.1", "6.2", "6.3", "6.4",
                  "7", "7.1", "7.2", "7.3",
                  "8", "8.1", "8.2", "8.3",
                  "9", "9.1"]  # Linux only
    libcxx: [libstdc++, libstdc++11]  # Linux only
    runtime: [MD, MT, MTd, MDd]  # Windows only
    cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
Treat the existing compiler entry as the source of libstdc++ and add an other_compiler entry:
gcc:
    version: ["4.1", "4.4", "4.5", "4.6", "4.7", "4.8", "4.9",
              "5", "5.1", "5.2", "5.3", "5.4", "5.5",
              "6", "6.1", "6.2", "6.3", "6.4",
              "7", "7.1", "7.2", "7.3",
              "8", "8.1", "8.2", "8.3",
              "9", "9.1"]
    libcxx: [libstdc++, libstdc++11]
    threads: [None, posix, win32]  # Windows MinGW
    exception: [None, dwarf2, sjlj, seh]  # Windows MinGW
    cppstd: [None, 98, gnu98, 11, gnu11, 14, gnu14, 17, gnu17, 20, gnu20]
    other_compiler:
        None:
        intel:
            version: ["17.0.0", "17.0.1", "17.0.2", "17.0.3", "17.0.4", "17.0.5", "17.0.6", "17.0.7", "17.0.8",
                      "18.0.0", "18.0.1", "18.0.2", "18.0.3", "18.0.4",
                      "19.0.0", "19.0.1", "19.0.2", "19.0.3"]
FYI, we extensively use the intel compiler with conan, and its binary compatibility with gcc & msvc is very important to us (we have just under 100 packages, and about a third are built with the intel compiler). We ended up using yaml anchors in our settings.yml:
compiler:
    gcc: &gcc
        version: ...
        ...
    Visual Studio: &msvc
        ...
    intel:
        version: ["16.0", "17.0", "18.0", "19.0"]
        base:
            gcc:
                <<: *gcc
                threads: [None]
                exception: [None]
            Visual Studio:
                <<: *msvc
                toolset: [None]
Having both the intel compiler version as well as the "base" compiler version is important, as the intel compiler tries (with varying levels of success) to emulate the base compiler. Using anchors made it easy for us to keep our settings.yml in sync with upstream conan, and generally caused few headaches as there was a single source of truth for the libstdcxx and runtime settings.
Describing it in this fashion made it relatively easy to have packages built with the intel compiler share a package_id with that of the corresponding system compiler:
def package_id(self):
    if self.info.full_settings.compiler == "intel":
        # Unfortunately assigning values is shallow
        base = self.info.settings.compiler.base
        self.info.settings.compiler = (
            base
        )  # So now self.info.settings.compiler is basically just a string
        # Deep copy the rest
        for field, value in base.as_list():
            tokens = field.split(".")
            attr = self.info.settings.compiler
            for token in tokens[:-1]:
                attr = getattr(attr, token)
            setattr(attr, tokens[-1], value)
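The dotted-field walk in that loop can be exercised outside Conan. A toy sketch of the same getattr/setattr traversal (Node and deep_assign are made up for illustration, not Conan API):

```python
class Node:
    """Stand-in for a Conan settings node (illustrative only)."""

def deep_assign(target, fields):
    """Assign dotted (field, value) pairs onto `target`, walking intermediate
    attributes with getattr/setattr like the package_id loop above (with the
    extra step of creating intermediate nodes, which Conan already has)."""
    for field, value in fields:
        tokens = field.split(".")
        attr = target
        for token in tokens[:-1]:
            if not hasattr(attr, token):
                setattr(attr, token, Node())
            attr = getattr(attr, token)
        setattr(attr, tokens[-1], value)

compiler = Node()
deep_assign(compiler, [("version", "8.2"), ("libcxx", "libstdc++11"),
                       ("base.version", "19.0.1")])
print(compiler.version)       # 8.2
print(compiler.base.version)  # 19.0.1
```

The deep copy is what makes "version" and "libcxx" land on the new compiler value rather than on a shared reference to the old one.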
(We don't currently have MacOS as a target platform, and I'm not sure exactly what compiler the intel compiler tries to emulate there, probably apple-clang.)
As a followup, regarding fortran. We have a number of fortran packages that we manage with conan, all of which are built with ifort. I believe that gfortran provides binary compatibility with gcc as well; however, gfortran and ifort have their own runtimes, and so they might not be completely compatible with each other.
I would say that for now you should just focus on just handling the C/C++ components of the intel compiler, and think about fortran support in a separate issue.
As an aside, we are using conan to manage packages for other languages as well. We found it just easier to keep a single repository of packages, rather than a repository for each language that we use. Thanks to conan's unbiased build system approach, it has been straightforward to (for instance) integrate pip packages into conan (including ugly ones, that depend upon c and fortran, like numpy).
Thanks a lot for the feedback! We are trying to gather the important bits to include the intel C++ compiler in the settings without making a mess of the _settings.yml_ file.
@peterSW thanks for pointing out the importance of tracking the version of the gcc compiler. This definitely seems like something we have to model. The settings structure proposed by @ohanar makes sense to me, and it tackles an important issue by separating the Visual runtime from the gcc libcxx.
I also like the idea of letting users implement the compatibility with the base compiler in the package ID. The good thing is that the information that the package was created with the intel compiler is preserved as metadata, while the ID stays compatible with gcc. Thanks for sharing your solution!
Regarding the Fortran compiler, I agree we should treat this as a different issue and maybe discuss your approach there 😄
Thanks all for the feedback! Really useful.
I think the proposal of @ohanar makes sense, especially if we consider that we could add a None base for those who want to keep the intel binaries as totally distinct binaries with their own package_id. Even the pieces of the package_id() could be built-in for the intel compiler.
My major question at this point is the intel version <-> other compiler version compatibility. With the presented approach, a package compiled with the Intel compiler will be usable from exactly one version of either gcc or msvc. Is this the general case? Is there a table or statement of this somewhere in the Intel docs? Wouldn't it be more general to have base: gcc: version: None or some other mechanism to specify that it would be valid for any other compiler version?
My major question at this point is the intel version <-> other compiler version compatibility. With the presented approach, a package compiled with the Intel compiler will be usable from exactly one version of either gcc or msvc. Is this the general case? Is there a table or statement of this somewhere in the Intel docs? Wouldn't it be more general to have base: gcc: version: None or some other mechanism to specify that it would be valid for any other compiler version?
For most packages I would expect that base: gcc: version will have a bigger significance for compatibility than intel: version, because the Intel compiler aims for compatibility with the "base" compiler and uses its headers and libraries. The best documentation on this I know about is here: https://software.intel.com/en-us/cpp-compiler-developer-guide-and-reference-gcc-compatibility-and-interoperability
I think GCC's manual entry on "ABI Policy and Guidelines" is also quite relevant:
https://gcc.gnu.org/onlinedocs/libstdc++/manual/abi.html
I think the proposal of @ohanar makes sense, especially if we consider that we could add a None base for those who want to keep the intel binaries as totally distinct binaries with their own package_id.
I don't really think this makes any sense: the intel compiler requires another compiler to already be installed on your system, and leverages that compiler while compiling. For example, on Windows you get the following error if cl.exe is not in PATH:
Intel(R) C++ Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.4.245 Build 20190417
Copyright (C) 1985-2019 Intel Corporation. All rights reserved.
icl: error #10114: Microsoft Visual C++ not found in path
On Linux you get a similar error if gcc/g++ is not in PATH. If you don't mess with the package_id at all, you will already have binaries that are only compatible with the intel compiler, so I don't really see the need for a None base (plus, as I mentioned, it is nonsensical).
I think the best way to handle ABI compatibility would be to add a couple of methods -- ConanInfo.intel_compatible and ConanInfo.intel_incompatible -- and decide on a default. IMO, the default should be ABI compatibility with the base compiler; there have only been a couple of exceptions to that rule in our usage.
Ok, understood, I didn't know that.
What I would like to model is the possibility for a package to have distinct binary packages for intel and gcc, not a single shared package ID. We cannot force compatibility without letting users opt out and declare that they want both a real gcc binary and an intel one, and be able to consume and use both in some way. Even if they are binary compatible, Conan should be able to manage them as different binaries. We need to think of some way to define this.
@memsharded I think that is perfectly viable by adding following:
class ConanInfo(object):
    def __init__(...):
        ...
        # default behaviour is for binaries built with the intel compiler to be
        # compatible with the base compiler:
        self.intel_compatible()

    def intel_compatible(self):
        # Basically what I put above in the package_id method
        if self.full_settings.compiler != "intel":
            return
        # Unfortunately assigning values is shallow
        self.settings.compiler = (
            self.full_settings.compiler.base
        )  # So now self.settings.compiler is basically just a string
        # Deep copy everything
        for field, value in self.full_settings.compiler.base.as_list():
            tokens = field.split(".")
            attr = self.settings.compiler
            for token in tokens[:-1]:
                attr = getattr(attr, token)
            setattr(attr, tokens[-1], value)

    def intel_incompatible(self):
        # Method to opt out of binary compatibility
        if self.full_settings.compiler != "intel":
            return
        # Unfortunately assigning values is shallow
        self.settings.compiler = (
            self.full_settings.compiler
        )  # So now self.settings.compiler is basically just a string
        # Deep copy everything
        for field, value in self.full_settings.compiler.as_list():
            tokens = field.split(".")
            attr = self.settings.compiler
            for token in tokens[:-1]:
                attr = getattr(attr, token)
            setattr(attr, tokens[-1], value)
Yes, sure, thanks very much for the suggestion, that would be a nice approach indeed.
I am still trying to go one step further, thinking of some way that allows defining that, and even more: to use the Intel binary as a gcc-compatible one, or to opt in to use the gcc one, without needing to change the recipe.
I didn't notice the "consume and use both" part. I don't really know how you could do that without a package being able to override its dependencies' settings (which, last I checked, you couldn't do, unlike options). I don't see how having a None base would escape that reality either.
Ok, I think we should proceed with at least a PoC. My proposal is:
- Intel compiler has base with values "gcc", "msvc" as possibilities. They include their versions and basic configuration. No need to use yaml substitution, and then invalidating with [None], just a copy of the necessary bits.
- package_id maps to the one defined by the "base" compiler (compatible by default).
- To be able to generate distinct binaries for the intel and the base compiler, an ignore = [None, True] setting can be added to the "base" one. When True, a new binary package-ID will be computed hashing the Intel values too, adding them to the "base" values.
- I think that basic support for CMake and Visual Studio build helpers should be provided.
I'll start working on a pull request.
- Intel compiler has base with values "gcc", "msvc" as possibilities. They include their versions and basic configuration. No need to use yaml substitution, and then invalidating with [None], just a copy of the necessary bits.
I'm pretty sure it needs to be "Visual Studio" not "msvc" to actually produce compatible ids.
Also, the downside of not using yaml substitutions is additional maintenance: every time you make changes to gcc, you would also have to make sure to check the intel compiler, and in practice less frequently used features, like this one, get missed a lot (I'm speaking from experience). For what it's worth, it isn't strictly necessary to invalidate parts of the settings, as conan already allows for clearly bogus settings (e.g. compiler=gcc, compiler.version=4.9, compiler.libcxx=libstdc++ or os=Linux, compiler=gcc, compiler.threads=win32).
- To be able to generate distinct binaries for the intel and the base compiler, an ignore = [None, True] setting can be added to the "base" one. When True, a new binary package-ID will be computed hashing the Intel values too, adding them to the "base" values.
I'm a bit confused about what purpose this serves. I'm positive this isn't needed to have distinct package ids for intel vs base, and this also doesn't really enable a consumer package to specify whether or not they want an intel or base binary (see #3000).
I'll start working on a pull request.
Great, thanks for your offer. Please sync with @danimtb for details if necessary.
I'm pretty sure it needs to be "Visual Studio" not "msvc" to actually produce compatible ids.
Yes, it was a shortcut; it is better to keep it consistent. Though it is no problem to have a different value (if we wanted to make it consistent with the new compiler setting that we are considering adding for Visual Studio, to use the toolset version instead of the IDE version), as we will process it internally, so it could be replaced. But yeah, let's use "Visual Studio".
Also, the downside of not using yaml substitutions is additional maintenance, every time you make changes to gcc, you would also have to make sure to check the intel compiler, and in practice less frequently used features, like this one, get missed a lot (I'm speaking from experience).
Yes, I am not opposed to using substitutions.
I'm a bit confused about what purpose this serves. I'm positive this isn't needed to have distinct package ids for intel vs base, and this also doesn't really enable a consumer package to specify whether or not they want an intel or base binary
The use case would be someone who wants to create binaries both for the intel compiler and for the gcc equivalent, and use them in different situations. Being compatible doesn't always mean it is the same binary with the same characteristics; for example, you may want to benchmark those two binaries against each other, and then it is absolutely necessary to be able to generate two different package IDs for those two different artifacts. I am fine with the default being compatible (which, by the way, means that if you build the package with intel and later build again with the equivalent "base" compiler, you will get the same package-ID and the binary will be replaced), but I think we should leave an opt-in for users who want to manage different binaries. The way would be:
$ conan create . pkg/0.1@user/channel -s compiler=intel -s compiler.base=gcc -s compiler.base.ignore=True
# generates package with package-id=ID1
$ conan create . pkg/0.1@user/channel -s compiler=gcc ...
# generates package with package-id=ID2
# consumer with a requires=pkg/0.1@user/channel
$ conan install . -s compiler=gcc ... # will resolve to package ID2
$ conan install . -s compiler=gcc -s pkg:compiler=intel -s pkg:compiler.base=gcc -s pkg:compiler.base.ignore=True ... # should resolve to package ID1
This should work, but the truth is that I find it not very UI/UX friendly, needing to specify the whole set of "base" subsettings. We might try to improve the way to define that, especially in profiles:
compiler=gcc
compiler.version=4.9
compiler.libcxx=libstdc++
pkg:compiler=intel
pkg:compiler.version=17.1
pkg:compiler.base=gcc
pkg:compiler.base.version=4.9
pkg:compiler.base.libcxx=libstdc++
pkg:compiler.base.ignore=True
Please tell me if the above makes sense.
I have opened a new PR adding the settings and the compatibility methods for package ID here: https://github.com/conan-io/conan/pull/5626
Feel free to review or add any suggestions.
@memsharded Yes, I think I finally understand what you are saying.
I agree that the UX isn't very friendly. For us it hasn't really been too big of an issue, as either things are running in CI, or we are calling conan from cmake and extracting everything from cmake's knowledge.
Some updates on the different iterations done to implement this feature:
2nd PR #5770: Add intel compiler as a subsetting of the supported ones.
compiler.intel.compatible vs compiler.intel.incompatible: after some internal discussions, we realized that the current settings model is not enough to model this "compatibility" of packages, and something else is needed regarding the compatibility of the package IDs, like:
def package_id(self):
    if self.settings.compiler == "intel":
        info = self.info.copy()
        info.settings.compiler = self.settings.compiler.base
        info.settings.compiler.version = self.settings.compiler.base.version
        ...
        self.compatible_ids.append(info)
Related to #5837
We will continue the development to support the Intel compiler along the lines of the package ID compatibility, although it might require some more time.
Thanks 😄