Conan: Build toolchain POC

Created on 12 Sep 2019 · 31 comments · Source: conan-io/conan

We want to develop a POC (as lean as possible, just to clarify where we want to go) that consists of:

  • A new method def toolchain(self) for the recipe that:

    • Will look like:

      def toolchain(self):
        tc = CMakeToolchain(self)
        tc.definitions["BobSponge"] = "ON"
        return tc
      
    • Will be called by Conan in the install process and will generate a conan_toolchain.cmake file. That way, after an install, we have all the information that we need and we could potentially call a conan build without losing information.

    • That file should take care of adjusting everything but the dependencies information. That is:
      - Compiler check (version mismatch, etc.).
      - std, libcxx, fpic, rpath
      - Any other thing that depends on the settings and the user fine adjustments in the CMakeToolchain.

    • The method will have available new "toolchain helpers" that will replicate or move functionality from the build helpers.

  • The new build helpers (e.g: CMake()):

    • Will be dummy, just calling CMake and injecting the conan_toolchain.cmake file. The most common way is by using -DCMAKE_TOOLCHAIN_FILE (see the sketch after this list). See #2683 and analyze more alternatives.
    • Different import namespace? Or better, a dual behavior depending on the existence of the conan_toolchain.cmake? (Pre 2.0? Only 2.0?)
    • Think about how it should behave: should it do everything automatically, or declare variables and macros and then allow activating them manually? I think the toolchain information should probably be applied automatically, with the "generators" files being optionally included. Read more below.
    • Be careful: if you put a print in the toolchain, using -DCMAKE_TOOLCHAIN_FILE you will see that it is called several times, and the compiler may not even be defined in the first call.
  • The files generated by the generators (like conanbuildinfo.cmake) should remove all the things not related to the dependencies.

  • Probably (question), the new toolchain file could include (or not) the conanbuildinfo.cmake based on something declared in the CMakeToolchain() instance? Maybe you want to use only find_package... maybe you want the Conan targets... Try alternatives; consider that maybe something has to be included AFTER the project()? We need to try a lot of things here.
  • Should we start generating a new file (not conanbuildinfo.cmake) so we don't break everything? Take into account that we might want a single cmake generator for Conan 2.0 that generates all the find-script files and the "global" one.

  • Start with CMake, follow with MSBuild

  • Analyze possible flaws and discuss with the team
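Regarding the "dummy" build helpers bullet above, here is a minimal sketch of what such a helper could reduce to; the class shape and command composition are illustrative (only the -DCMAKE_TOOLCHAIN_FILE injection comes from the points above), not a final API:

```python
class CMake(object):
    """Sketch of a 'dummy' build helper: it computes no flags itself,
    it only injects the generated conan_toolchain.cmake file."""

    def __init__(self, conanfile):
        self._conanfile = conanfile

    def configure(self, source_dir="."):
        # All settings-derived adjustments live in conan_toolchain.cmake now.
        self._conanfile.run(
            'cmake "%s" -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake' % source_dir)

    def build(self):
        self._conanfile.run("cmake --build .")
```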

All 31 comments

Some clarifications:

Will be called by Conan in the install process and will generate a conan_toolchain.cmake file. That way, after an install, we have all the information that we need and we could potentially call a conan build without losing information.

It also needs to create that file inside the cache, and the build helper needs to use it. It is not only for the conan install command in user space.

The files generated by the generators (like conanbuildinfo.cmake) should remove all the things not related to the dependencies.

I wouldn't touch the cmake generator, or the conanbuildinfo.cmake, at all. If proposing something in code, it should be orthogonal; the system can create another conan_deps.cmake file to complement the toolchain one.

Other issues:

  • Please also consider the implications for the multi-config generators, like VS: should it be just one toolchain file for them, I guess? And two toolchain files for the single-config generators?
  • Please consider what happens with the environment too: env-vars that might be defined in packages, profiles, etc. CMake allows some set(ENV{xxxx} ...) syntax, but maybe Conan virtualenvs are a more general approach to cover other build systems too?

Another problem with the "generated toolchain file" approach is the "chicken-and-egg" problem with CMake. If CMake itself is installed via Conan, how is it located?
I also tend to favor simply using the scripts provided by virtualenv. This also ensures that there are no differences/surprises between the IDE and the "manual" shell environment.

Another problem with the "generated toolchain file" approach is the "chicken-and-egg" problem with CMake. If CMake itself is installed via Conan, how is it located?

Yes, this is a good point; are you aware of more cases like this? I see the toolchains mostly as the compiler definitions, compiler flags, architectures... those things might make more sense to users if defined in a cmake toolchain file. Not sure if it is even possible to define that somehow with the environment so that cmake takes all of it from there, is it possible?

At a very, very, very early stage on this POC I also see the need of having a _toolchain_ and some kind of _environment_. In this preliminary example I'm building an application using _cmake_ as build_require. It works with a new _toolchain_ file and the current Conan activate/deactivate scripts for the virtualenv.

From the user perspective it would be something like:

mkdir build && cd build
conan install ..
source activate.sh  # <- I need to activate the virtualenv to add my CMake to the PATH
cmake .. -DCMAKE_TOOLCHAIN_FILE=conan.cmake
cmake --build .
...
source deactivate.sh

which gets the same result as the current local workflow (and the same as cache-related commands)

mkdir build && cd build
conan install ..
conan build ..

It could be possible to set the environment variables in the toolchain file, but PATH, for sure, should be set before calling CMake. And possibly, from what I've heard around the plugins, the environment might need to be set before running some IDEs in order to get the environment variables there (path to tools, path to DLLs to debug/run, ...).

As @memsharded said, we need to gather information about more scenarios where the environment from Conan is used; maybe a walk through existing recipes could give us more hints.

Work in progress 👷

Here is my scenario: we are doing embedded development using various compilers and producing multiple variants of the binaries (processors, compiler flags, ...). To make setting up the development environment easier and ensure that the same toolchain is also used for the CI (functional safety requirements) we put everything influencing the resulting binaries (CMake, Ninja, the compiler, ...) into Conan packages. To have different versions of Conan installed in parallel, we are using pipenv.

The major pain point for the developers in this setup is that the IDE needs to be started from the respective shell environment (pipenv shell + conan virtualenv), which is cumbersome and means that switching between the variants (Conan profiles) requires closing the IDE, activating the environment of the other profile (deactivate, navigate to the other build folder, activate) and starting the IDE again.

My hope would be that this is handled by the IDE (plugin): User selects a Conan profile -> a separate build folder is created, conan install is executed and the respective virtualenv is used for building, running, debugging the created binaries.

Some new thoughts:

  • What to do with the environment and build-requires like cmake itself seems the most challenging thing. Creating "virtualenv.xxx" activate/deactivate scripts seems like a new point of friction, but for things like cmake itself it would be totally necessary.
  • What would the info be for multi-config build systems like the VS IDE? Right now the multi-config generators can manage deps info, but that would not work for the environment. Build-requires typically do not contain build_type, but that cannot be enforced or guaranteed.

We have been thinking about what we want from this POC for 1.20. Objectives in order:

  1. POC of a find_package_multi + toolchain => No conan basic setup, everything adjusted
  2. conan install + CMake build working locally
  3. Same for visual studio.
  4. (Only if 1.20 closed) Look into environment management (generate the activate scripts now or later, how to manage, etc.). This is expected to be done next sprint.

Handling the environment could be mandatory. Not only because the cmake executable itself can be a build-require (we could forget about that right now), but it looks like CMAKE_GENERATOR can only be set through the command line or using an environment variable (or with the initial-cache or PreLoad files). Some of the _"Environment variables that control the build"_ may have the same problem, but I think that all the important ones can also be set using the toolchain.

Interesting links: docs by CMake (link), and people trying to make it work (link and link).

Also, we might need the environment to run/initialize vcvars if using generators like Ninja or NMake.
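A recipe can already handle that today by activating vcvars itself; a minimal sketch using Conan 1.x tools.vcvars (the recipe name and exact cmake invocation are illustrative):

```python
from conans import ConanFile, tools


class NinjaBuiltPkg(ConanFile):
    # Hypothetical recipe, for illustration only
    name = "ninja_built_pkg"
    settings = "os", "arch", "compiler", "build_type"

    def build(self):
        # Ninja/NMake do not locate MSVC by themselves, so the vcvars
        # environment must be active before cmake is invoked.
        with tools.vcvars(self.settings):
            self.run('cmake .. -G Ninja -DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake')
            self.run('cmake --build .')
```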

I am fine with the users calling cmake . -G "<Generator>" explicitly. That is indeed the most common usage of the tool, and many users will expect to do it. Yes, it needs to match the settings declared in the conan install, but it is their responsibility.

It seems that using another file to pre-populate the cache with the -C argument is a possibility; I find it interesting too.
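To make the two injection points concrete, a small illustrative helper (not Conan code) composing the configure command either way; the conan_cache.cmake name is hypothetical:

```python
def cmake_configure_command(source_dir, use_initial_cache=False):
    """Compose a cmake configure command using one of the two discussed mechanisms."""
    cmd = ['cmake', '"%s"' % source_dir]
    if use_initial_cache:
        # Pre-populate the CMake cache from a file (hypothetical name).
        cmd.append('-C conan_cache.cmake')
    else:
        # Inject the generated toolchain file.
        cmd.append('-DCMAKE_TOOLCHAIN_FILE=conan_toolchain.cmake')
    return " ".join(cmd)
```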

I have never had to activate any virtual environment in either Linux or Windows to be able to run my cmake builds. I really want to keep this possibility with Conan, even if it only covers many cases but not all of them, and virtualenvs are used strictly when there is no other possible way.

We just posted a new issue, #6361, containing a process we have been developing (and writing about) for some time now.

This process is notably trying to find alternatives to conan_basic_setup(). Is it realistic to hope that the toolchain can handle all the translation of Conan settings to CMake?

Hi, @Adnn Really interesting reading that "Modern C++ Component", thanks a lot for sharing.

The toolchain we are developing in #5919 is mostly about CMake and the cmake_find_package[_multi] generators. We think that our generated FindXXX (or XXXConfig) files have several advantages (transitive requires and so on); that's the reason we cannot be satisfied with just the cmake_paths generator.

I'm quite confident that the first iteration of the toolchain can be merged for the following release (not 100% sure because we have a couple of events these weeks, but we'll try to), and the idea is that it will be able to translate all the Conan settings into CMake variables. You can have a look at the tests in that PR (files under conans/test/functional/toolchain); they check that different settings populate the CMake cache with the proper variables and produce the requested binary.

Nevertheless this approach has some limitations:

  • changing the CMAKE_GENERATOR_NAME requires using the command line or an environment variable (the user will need to activate the virtualenv generator)
  • variables that depend on the CONFIG need to be provided in generator expressions and these are only evaluated if included in a file added to the CMAKE_PROJECT_INCLUDE variable. We are doing that, but it is only available for modern CMake

So yes, I really encourage you to try that PR and provide feedback, if you have any problem understanding the tests or running it locally, please ask me and I'll be pleased to guide you through it.

Thanks!

Just a note: some IDEs allow you to change some environment variables; for example, I use vscode with a generator for vscode. It's by no means perfect, but it provides an alternative to virtualenvs/toolchain files. I also know there was something similar built by pepe82sh which might provide some inspiration.

It is also good to note that some environment variables cannot be set using cmake. For example, the QNX compiler needs the QNX_HOST and QNX_TARGET env variables defined (sysroot and such). These cannot be set by cmake but need to be set outside cmake.
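Until the environment story is solved, one way recipes can deal with such variables today is to set them around the build themselves; a minimal sketch using Conan 1.x tools.environment_append, with assumed QNX install paths:

```python
from conans import ConanFile, CMake, tools


class QnxAppConan(ConanFile):
    # Hypothetical recipe, for illustration only
    name = "qnx_app"
    settings = "os", "arch", "compiler", "build_type"

    def build(self):
        # QNX_HOST/QNX_TARGET must exist in the process environment before
        # CMake runs; they cannot be injected from a CMake toolchain file.
        qnx_env = {
            "QNX_HOST": "/opt/qnx700/host/linux/x86_64",  # assumed install path
            "QNX_TARGET": "/opt/qnx700/target/qnx7",      # assumed sysroot path
        }
        with tools.environment_append(qnx_env):
            cmake = CMake(self)
            cmake.configure()
            cmake.build()
```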

Two general features we've just discussed adding to every toolchain. For those toolchains which are currently in-progress, if possible and feasible, look into adding these two features.

include/import
A first class member function for "including" other build system files (probably a list).
So, for
cmake: include(foo.cmake)
make: include foo.make
msbuild: <Import Project="foo.props" />

This made obvious sense to @memsharded and me, and seems easy to implement.
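A minimal sketch of what this could look like from the recipe side, assuming a hypothetical include() method on the toolchain helpers (the name is illustrative, not a decided API):

```python
def toolchain(self):
    tc = CMakeToolchain(self)
    # Hypothetical API: collect extra build-system files that the generated
    # conan_toolchain file will include/import.
    tc.include("cmake/warnings.cmake")
    tc.include("cmake/sanitizers.cmake")
    tc.write_toolchain_files()
```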

filename override
We've already seen lots of requests about overriding filenames. Whenever generated files are involved, it seems inevitable that there will be valid use cases for users to override the file names. In this case, we already know of several clear and compelling cases beyond users just wanting to control the name.

  1. For MSVC, multi-project solutions
    It's common for organizations to have one solution with multiple related-but-separate projects, and to use a single conanfile.py to manage that solution. It is not uncommon for each project to have separate configurations which might align with different "settings" from the Conan toolchain.

  2. Multiple Makefile or CMakeLists.txt in subfolders.
    It's not uncommon for organizations to do something similar with other build systems. That is, have a single "large component project" with separate build system files. Sometimes, they have a super-build-system-file that can build all of them, but in terms of what they might need from Conan for the toolchain, they might want to generate a separate toolchain file for each sub-project.

In both cases above, it's now easy for users to instantiate multiple Toolchain() classes, one for each project, set the parameters accordingly, and generate them with unique filenames. Obviously, for the common case where there is only one project, a default value can be provided, making the parameter optional.
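A rough illustration of that idea, assuming a hypothetical filename argument on write_toolchain_files() (the actual parameter name was not settled in this discussion):

```python
def toolchain(self):
    # One toolchain per sub-project, each with its own adjustments and filename.
    app_tc = CMakeToolchain(self)
    app_tc.definitions["BUILD_APP"] = "ON"
    app_tc.write_toolchain_files(filename="app_toolchain.cmake")      # hypothetical argument

    tests_tc = CMakeToolchain(self)
    tests_tc.definitions["BUILD_TESTING"] = "ON"
    tests_tc.write_toolchain_files(filename="tests_toolchain.cmake")  # hypothetical argument
```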

Implications for future Generators
Everything that has been said above for these two features we also discussed as valid points for future "Generators". @memsharded is currently working on a POC for generators which are instantiated and generate files in the toolchain() method, just like toolchains.

include/import
A first class member function for "including" other build system files (probably a list).
So, for
cmake: include(foo.cmake)
make: include foo.make
msbuild: <Import Project="foo.props" />

A _purpose_ is needed here. This can be useful for the user to _add_ a custom toolchain that can override values configured by Conan, but it can also be useful to provide values that the Conan toolchain will use (for example, adding flags to the CXX_FLAGS_INIT variable). In the first case, the include will be the last line; in the second one, it will be closer to the beginning. Also, given CMake, we need to decide whether this file should affect the try_compile calls or not. And, on top of this, I'd love the same logic for different build systems, and probably not all of them offer the same flexibility as CMake does.

What use-case are we thinking here about?

What use-case are we thinking here about?

I cannot find the issue now, but we have already got this request from users. They have their own toolchain files, and want to use them. Not sure what information they have there, but it seems just including it was good enough for them; basically, the limitation is that you can only specify one toolchain file on the cmake command line.

A different approach would be to let the users provide their own template file for the toolchain so they can _play_ with the context provided by Conan as well (besides adding their own variables/definitions). This was an idea @maikelvdh and I were talking about a long time ago, it should fit any use case (including before, including after, changing some values,...).

Just exploring other approaches.

As @memsharded said, the most common use case is for users who want to include their own build system files.

A different approach would be to let the users provide their own template file for the toolchain so they can play with the context provided by Conan as well (besides adding their own variables/definitions). This was an idea @maikelvdh and I were talking about a long time ago, it should fit any use case (including before, including after, changing some values,...).

Possibly the main reason is uniformity/consistency. You always pass the conan_toolchain file to the build system. In some scenarios, that file will include another file, for example for cross-building, mobile, etc., while in other cases there is no need for a custom toolchain file (native). Having the conan toolchain -> include others mechanism is an opt-in in any case; as long as we don't hardcode passing the conan toolchain to the build helpers, users can always make their toolchains include the conan one, no need to do anything special, right?

It is the same with a template: if you provide the template to be used, it will be used; if nothing is provided, Conan will fall back to the default/built-in one, no need to do anything special (both need the same inputs: the name of the user-provided toolchain file, or the name of the user-provided template file). The template allows more control; it lets you do things that are impossible by including a file.

For example, as it is right now in the CMake toolchain: using a custom template you can modify the lines that affect compiler feature detection; this is something that won't be possible by including a file at the end.

That's the reason why I was asking about the use cases we want to cover, and those we won't support.

It is the same with a template: if you provide the template to be used, it will be used; if nothing is provided, Conan will fall back to the default/built-in one, no need to do anything special (both need the same inputs: the name of the user-provided toolchain file, or the name of the user-provided template file). The template allows more control; it lets you do things that are impossible by including a file.

I am not sure I follow; maybe we are talking about different implementations of the same feature. Let's say we have this common scenario:

Users have some QNX myproprietary_qcc_toolchain.cmake that they want to use in conjunction with the Conan-provided toolchain. They could have their own toolchain include the conan one, or vice versa. We are talking here about the use case of the Conan toolchain including the user one.

One possible implementation is in the recipe/toolchain python code something like:

def toolchain(self):
    tc = CMakeToolchain(self)
    if self.settings.compiler == "qcc":
        tc.include("myproprietary_qcc_toolchain.cmake")
    tc.write_toolchain_files()

The other possible implementation, IIUC, using templates could be:

def toolchain(self):
    tc = CMakeToolchain(self)
    if self.settings.compiler == "qcc":
        qcc_tmpl = """ { extends base-toolchain-template }
                       include("myproprietary_qcc_toolchain.cmake")
                   """
        tc.template = qcc_tmpl
    tc.write_toolchain_files()

Are these the alternatives we are talking about?

The one with the template would be more like (the template is not a string, but an actual file):

def toolchain(self):
    tc = CMakeToolchain(self)
    if self.settings.compiler == "qcc":
        tc.include("toolchain/qcc.cmake")
    tc.write_toolchain_files()

The advantage of using a template is that you can have more flexibility:

  • include something at some point in the middle (maybe it is at the end):

    {% extends 'toolchain/cross-building.cmake' %}
    
    {% block project_config %}
    {{ super() }}
    include("my-toolchain.cmake")
    {% endblock %}
    
  • add something to the compiler_features config:

    {% extends 'toolchain/cross-building.cmake' %}
    
    {% block compiler_features_config %}
    {{ super() }}
    set(CXX_FLAGS "flag I need in the try-compile blocks")
    {% endblock %}
    
  • remove everything from the project_config:

    {% extends 'toolchain/cross-building.cmake' %}
    
    {% block project_config %}
    {% endblock %}
    

Sorry, but I don't understand the proposal yet.
Let's say that users have a qcc_toolchain.cmake that contains something like this, and that they would ideally want to maintain as-is:

set(CMAKE_CXX_FLAGS -qcxx=someflag -qcc=myvalue)
set(CMAKE_LINKER_FLAGS --linker=qlinker --option=link-super)

Let's assume, for the sake of simplicity, that this file is in the source repo, together with the conanfile.py recipe and the source code.
What would be the proposal for integrating it?
The "include" one would be:

def toolchain(self):
    tc = CMakeToolchain(self)
    if self.settings.compiler == "qcc":
        tc.include("qcc_toolchain.cmake")
    tc.write_toolchain_files()

And that would generate something in the conan_toolchain.cmake like:

include("qcc_toolchain.cmake")
...

Could you please explain your proposal?

The template will look like (let's name this file qcc_toolchain.cmake):

{% extends 'toolchain/cross-building.cmake' %}  # The user can also choose the template to inherit

{% block project_config %}  # Now it is called 'main'
{{ super() }}
set(CMAKE_CXX_FLAGS -qcxx=someflag -qcc=myvalue)
set(CMAKE_LINKER_FLAGS --linker=qlinker --option=link-super)
{% endblock %}

And the recipe should consider this template file:

def toolchain(self):
    tc = CMakeToolchain(self)
    if self.settings.compiler == "qcc":
        tc.template_name = "qcc_toolchain.cmake"
    tc.write_toolchain_files()

One advantage (sometimes it is important): the user can choose where to put those set definitions; maybe the hardcoded place in your proposal is not the place inside the file where they should be included.

The user might want to set those variables so they are considered for the try_compile calls, and so they decide to add those lines to a different block. This template will render something completely different:

{% extends 'toolchain/cross-building.cmake' %}  # The user can also choose the template to inherit

{% block compiler_features_config %}  # Now it is called 'before_try_compile'
{{ super() }}
set(CMAKE_CXX_FLAGS -qcxx=someflag -qcc=myvalue)
set(CMAKE_LINKER_FLAGS --linker=qlinker --option=link-super)
{% endblock %}

Ok, now it is more clear, thanks. The key is that it is tc.template_name = "qcc_toolchain.cmake", and not tc.include().

I guess the users could do an extra include("myreal_qcc_toolchain.cmake") to include a pure cmake toolchain file, maybe provided by a vendor or just something they don't want to modify.

I agree. Jinja templates are very powerful and flexible, and they can probably solve some difficult cases which will come up (or have come up already). At the same time, there should still be simple support for the simple case of include("myreal_qcc_toolchain.cmake") like @memsharded showed.

    if self.settings.compiler == "qcc":
        tc.include("qcc_toolchain.cmake")

Ok, now back to my original question: what are the use cases we want to cover? What are the use cases we cannot cover?

The Conan toolchain assigns values to _every_ CMake flag, trying to create an actual toolchain: Conan configures the compiler and adds all the flags needed according to the settings. Then we will allow the user to _include_ another file before (or after? please, let me know where to include this file in the CMake toolchain) that will add/set more, different, or fewer settings and flags... the two toolchains are not always going to play well together. Users will need to modify the toolchain they already have or the toolchain provided by the vendor; it is not plug/include & play.

Then, if you already need to modify the toolchain you have, I just wonder which approach is better: creating a new straightforward toolchain using values provided by Conan and your own values:

  • Use the toolchain provided by the vendor: this works out of the box

    if self.settings.compiler == "qcc":
        tc.template_name = "vendor_toolchain.cmake"

  • A template that uses values provided by Conan and values _assigned_ by the user:

    if self.settings.compiler == "qcc":
        tc.template_name = "toolchain_template.cmake"

    toolchain-template.cmake (not every template extends or overrides blocks):

    # Adapting my toolchain to Conan workflow
    cmake_minimum_required(VERSION 3.15)
    
    # Conan provides context with useful values
    set(CMAKE_BUILD_TYPE "{{ build_type }}" CACHE STRING "Choose the type of build." FORCE)
    set(CMAKE_EXPORT_NO_PACKAGE_REGISTRY ON)
    set(CMAKE_MODULE_PATH {{ cmake_module_path }} ${CMAKE_MODULE_PATH})
    set(CMAKE_PREFIX_PATH {{ cmake_prefix_path }} ${CMAKE_PREFIX_PATH})
    set(CMAKE_INSTALL_PREFIX "{{install_prefix}}" CACHE STRING "" FORCE)
    
    # Now values I want to hardcode
    set(CMAKE_POSITION_INDEPENDENT_CODE ON)
    set(CMAKE_CXX_STANDARD 17)
    set(CMAKE_CXX_EXTENSIONS {{ cppstd_extensions }})
    
    set(CMAKE_CXX_FLAGS "x86 myflag")
    set(CMAKE_C_FLAGS "x86 mycflag")
    
  • More complex templates using extends and block

With an include(other-file.cmake) approach:

  • it is impossible to reproduce the first scenario; we would need to add another feature to the CMakeToolchain to take the name of the toolchain to use (not to include it).
  • As for the second scenario, it becomes much more complex: the included file will need to _unset_ variables that the Conan toolchain is assigning (and then it is no longer the toolchain I already have or the one provided by a vendor):

other-file.cmake (included at the end of the Conan toolchain)
```cmake
# This toolchain is included after the Conan one
cmake_minimum_required(VERSION 3.15)

# Get rid of some variables defined by Conan toolchain that I don't want/support
unset(CMAKE_PROJECT_INCLUDE)
unset(CMAKE_GENERATOR_TOOLSET)
unset....

# Inject something I want to use in the compiler feature checks...
# !!! I cannot do it because I already returned from here:
# !!! https://github.com/conan-io/conan/blob/09349b50472c50012d38777ac0330db6a7b9a167/conans/client/toolchain/cmake/base.py#L80

# Now assign my values
set(CMAKE_POSITION_INDEPENDENT_CODE ON)
set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_FLAGS "x86 myflag")
set(CMAKE_C_FLAGS "x86 mycflag")
```

For sure, adding an include to the bottom of our toolchain is more convenient for some use-cases, but it is not suitable for many others. That's the reason why I'm asking about the use-cases we want to cover. Maybe it is better to add a feature that is a bit more complex but can cover _all_ the possible use-cases instead of adding several features that will allow the user to achieve the same goal using different paths.

@jgsogo There are simple cases where users want to include build system files with arbitrary variables that do not conflict with anything the Conan toolchain would set. In many of those cases, the include or import location might be irrelevant. With that said, it's clear that you've thought about this a lot, so I'll defer to your judgement on the feature moving forward.

Let me summarize a bit how I see it.

  1. base scenario - native compilation: just use generic toolchain. (generic toolchain defines common variables which are platform agnostic and always needed)
  2. cross-building (e.g. iOS): platform toolchain inherits from generic toolchain. (platform toolchain defines variables which are specific to the target platform)
  3. cross-building with base vendor toolchain (e.g. Android, Emscripten - platform SDK provides its own toolchain we're basing on): platform toolchain -> generic toolchain -> vendor toolchain (same as 2., but we include vendor toolchain)

the overall structure inside the resulting toolchain would be like:

# define platform-specific variables, for instance
set(CMAKE_SYSTEM_NAME Android)
set(ANDROID_ABI armv7a)
...

# include vendor toolchain
include(${ANDROID_NDK}/build/cmake/android.toolchain.cmake)

# define common, generic, platform-agnostic variables
set(CMAKE_CXX_FLAGS ${CONAN_CXX_FLAGS})
...

From the user's perspective, I'd like the ability to define the vendor toolchain file:

def toolchain(self):
    tc = CMakeToolchain(self)
    tc.base_toolchain = "qcc_toolchain.cmake"
    tc.write_toolchain_files()

I don't think we need an include method here - do we expect multiple vendor toolchains to be simultaneously included?

@SSE4

A bit confusing too, as you are mixing qcc and android toolchains in the example. Do you mean that tc.base_toolchain = "qcc_toolchain.cmake" will automatically generate some include("qcc_toolchain.cmake") inside the generated conan_toolchain.cmake?

@memsharded yes, exactly, it will generate an include directive for the base vendor toolchain needed (it could be android, emscripten, qnx or something else).


This feature was becoming very complex, and it caused me to go back to first principles and reconsider what we're trying to achieve. I had a significant realization; here's the braindump:

  • toolchains (and generators) currently have pretty "closed" scope in terms of user-control
  • we start out exposing preprocessor defs and build system variables via the class/API to users
  • everyone seems to agree that this makes sense
  • I suggested adding another extensibility point, includes()
  • this spawned the suggestion of making the "API" a jinja template for maximum extensibility
  • this spawned more discussion about logistics of user-provided templates
  • this spawned the realization that we would one day require new user-provided classes for new platforms
  • this spawned the realization that this could be handled via python_requires
  • despite being based on a sensible premise, this is starting to feel like an overwhelming amount of complexity

Finally, this spawned the memory that we already have "maximum extensibility" in the toolchains feature, wherein users can implement their own toolchain classes and distribute them via python_requires. So, compared to that, the templates provide maximum extensibility of "the built-in toolchains we choose to provide and support, without having to manage your own toolchain". The template approach still might be worth doing, since it's fairly annoying to manage a custom toolchain class as a python_requires if you just want to change one or two little things from the built-in one. However, maybe remembering that users can implement their own toolchain methods can help us draw a more reasonable line about where we stop extensibility of the templates.
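For reference, a minimal sketch of that existing escape hatch; the package name, class, and file layout are chosen for illustration (Conan >= 1.20 python_requires syntax):

```python
# --- conanfile.py of the shared package, exported as e.g.
# --- "toolchain_utils/0.1@company/stable" (hypothetical reference)
from conans import ConanFile


class QccToolchain(object):
    """Hand-rolled toolchain helper distributed via python_requires."""

    def __init__(self, conanfile):
        self._conanfile = conanfile

    def write_toolchain_files(self):
        # Render whatever toolchain content the organization needs.
        with open("conan_toolchain.cmake", "w") as f:
            f.write('set(CMAKE_CXX_FLAGS "-qcxx=someflag")\n')


class ToolchainUtils(ConanFile):
    name = "toolchain_utils"
    version = "0.1"


# --- conanfile.py of a consumer recipe (separate file) ---
class MyPkg(ConanFile):
    python_requires = "toolchain_utils/0.1@company/stable"

    def toolchain(self):
        tc = self.python_requires["toolchain_utils"].module.QccToolchain(self)
        tc.write_toolchain_files()
```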
