Conan: [workspace] Adding "conan-project" feature, to allow simultaneous edition of connected packages

Created on 10 Jun 2017 · 63 comments · Source: conan-io/conan

Everything in user space, linking in user space. How? By redirecting each package's "root folder" to a local folder instead of the conan cache.

Most helpful comment

Here goes the first preview of this feature: https://github.com/memsharded/conan-project-example

It runs from my branch, from sources. It is still preliminary, tested for that example with CMake and VS (I also tried it with CLion and the VS toolchain, and it worked too).

I will continue working on this, but feedback is very welcome!

All 63 comments

Hi, sounds interesting!
Would this also allow building several changed, connected packages incrementally? Or building one big Visual Studio solution out of them? This would be very interesting for developers who have to change several packages simultaneously (e.g. some interface package and a dependent implementation package).

I am also searching for a solution to the "multi-package development" problem and was thinking of a CMake-based approach where the other packages are included as subdirectories instead of using their binary artifacts, like
cmake -D DEV_PACKAGES=interfacelibfoo ../myapp
And then do a
add_subdirectory("../interfacelibfoo" "${CMAKE_CURRENT_BINARY_DIR}/interfacelibfoo_build")
in the CMakeLists.txt of the myapp package instead of find_package(interfacelibfoo).
This way all source files would be in the same project in the IDE, and CMake would take care of building all the dependencies of the build target I am working on.
But I haven't had the time to work on it yet.
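Sketched as a consumer-side CMakeLists.txt, that switch could look roughly like this (DEV_PACKAGES and the relative path are illustrative ideas from this comment, not an existing Conan mechanism):

```cmake
# Hypothetical switch: consume interfacelibfoo either as a Conan binary
# package or as an in-tree subdirectory for co-development.
# DEV_PACKAGES would be set on the command line, e.g.
#   cmake -D DEV_PACKAGES=interfacelibfoo ../myapp
if("interfacelibfoo" IN_LIST DEV_PACKAGES)
    # Build the dependency from its local working copy.
    add_subdirectory("../interfacelibfoo"
                     "${CMAKE_CURRENT_BINARY_DIR}/interfacelibfoo_build")
    target_link_libraries(myapp PRIVATE interfacelibfoo)
else()
    # Use the prebuilt Conan package as usual.
    find_package(interfacelibfoo REQUIRED)
    target_link_libraries(myapp PRIVATE interfacelibfoo::interfacelibfoo)
endif()
```

The value of the switch is that the IDE sees all the sources in one project, while untouched dependencies keep coming from the cache.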

Hi!

The solution, as we are starting to think about it, doesn't define how the local incremental build is done, because that would depend on the specific build system. The idea is that it will define which packages in the project are under edition, so that a conan install configures everything to link against the local copies instead of the cached ones.

But if you are using Visual Studio, you would need to create a Solution and add the Projects for the currently edited packages there; or, if you are using CMake and want to drive it from the build system or IDE directly, you would probably write a project-level CMakeLists.txt that defines that.

Conan could manage to build them in order, if you use conan commands, but I cannot think of something that would integrate cleanly with most existing build systems so that it is transparent to the user. Conan is not a build system, it never will be, and it is not on the roadmap to become a project generator (in the way that CMake is). We will look at approaches for generating the adequate information, in the same way conan can already generate files like conanbuildinfo.cmake to be used by build systems.

Thanks very much for the feedback!

Hi, any update on this issue? Is it going to be released soon?

@yosizelensky not yet, sorry. I'm sure we will start progressing on this very soon. Stay tuned and thanks for your interest.

My colleagues and I worked out a way to implement this use case.
We wrote a little Python script which generates a CMake file (as proposed by @petermbauer), and it works quite well.
The downside is that all the relevant recipes have to be in user space already. I achieved this by using a "recipe repository" which contains all our recipes.
On these I call the conan source step to fetch all the sources. In the next step, a root project CMake file is generated which includes all the subfolders.
However, it would be nice to see the native "conan-project" feature quite soon, because our solution clearly feels like a workaround.

@mschmid can you post a gist of this Python script? Could it work for Visual Studio .props files too?

@yosizelensky Yes, I will post it tomorrow. It currently only generates a .cmake file, which is included in the CMakeLists.txt of the relevant package. From this we generate a Visual Studio solution with CMake, which then points to source files in user space. This allows for refactoring over several modules as well as incremental linking, etc. It does not directly generate a .props file. Maybe generating a Visual Studio solution with CMake is also feasible for you?

@mschmid Can you expound on this a little further? Allow me to ask a few questions.

The downside is, that all recipes that are relevant have to be in userspace already. I achieved this by using a "Recipe Repository" which contains all our recipes.

Is this the same thing as the local conan repository that acts as a cache?

What are your recipes like? Do they fetch source from git or is the source snapshotted in the recipe with the recipe itself?

When you are co-developing a package, it sounds like you are modifying the source or build directories in your local conan repository. What happens if more than one project depends on that package? How do you isolate the package to make sure you aren't affecting other packages that happen to depend on it?

I think @memsharded has the right idea: packages you intend to co-develop need to be placed locally, not in the cache, to maintain the isolation.

What is your workflow in committing the code and updating the recipes? Do they reference some specific git change or the latest change on some branch?

Hi @kenfred,
no, my recipe repository is just a git repository which contains all our recipes - we currently do not keep them directly next to the source code. So the recipes need to fetch the source code from a git repo, as you assumed.

No, I am not changing the source code directly in the local conan cache, this would be bad practice.
My little script operates on the recipe repository in user space - it calls conan source there directly and adds these folders to a super-CMake file which includes all the source directories via add_subdirectory.
So the trick is, to get the necessary recipes into user space, to be able to fetch the source code of each module to user space.
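A minimal sketch of what such a generator could look like (the function name and folder layout are illustrative; this is not @mschmid's actual script, which he describes below as too project-specific to share):

```python
import os


def write_super_cmake(workspace_dir, module_dirs, project="DevWorkspace"):
    """Generate a top-level CMakeLists.txt that pulls every co-developed
    module into one build tree via add_subdirectory().

    Each entry in module_dirs is expected to be a folder (relative to
    workspace_dir) that already contains sources fetched by 'conan source'
    and its own CMakeLists.txt."""
    lines = [
        "cmake_minimum_required(VERSION 3.5)",
        "project({})".format(project),
    ]
    for module in module_dirs:
        # Give every module its own binary dir so builds don't collide.
        lines.append(
            'add_subdirectory("{0}" "${{CMAKE_BINARY_DIR}}/{0}_build")'.format(module)
        )
    path = os.path.join(workspace_dir, "CMakeLists.txt")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return path
```

Opening the generated file in an IDE then gives one project spanning all the co-developed modules.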

@yosizelensky I just checked the script - it's not general enough to post here yet, I'm sorry. It contains too many specific paths and tweaks that are only valid for our project/setup. If I come up with a more general solution, I will post it here.

@mschmid Thanks for your reply. I'm trying to understand the details of your setup.

Here are questions:

  1. If your recipes are just local files (pulled from a git repo, not in your local conan cache nor a conan remote), how does a recipe locate other packages?
  2. What does your recipe look like? Does it pull the latest commit on the master branch always? Do you check it in with a hash hardcoded within?
  3. What is your workflow in developing two packages at once where one depends on the other?
  4. In your git repos that contain source (which you've said do not contain recipes), are you able to check out any random commit and exactly reproduce the build? How do you make sure the recipes are unchanging relative to that commit?

@kenfred You are welcome!

  1. The recipe in the first place is a local file, correct. It has to be exported to the local cache, like any other recipe, to work with it. You are right! Because I also need the recipe locally in user space to be able to call conan source there, I do it this way.
  2. It checks out a specific hash, as discussed in #1516
  3. The workflow here would be to have both in user space, with a common CMake project over both of them, and then be able to work with both (common VS solution, refactoring, incremental building for development purposes, etc.). If both are ready, the versions have to be bumped and new packages have to be created.
  4. I am not able to build a random commit, only commits that have proved to work together before. The idea is to get a fixed relation between version number and git hash. But you are right - the recipes could evolve and somehow diverge from the source code repo. I am thinking of putting the recipes in the source modules directly, as proposed in #1516 by @marco-m. However, then I need some way to check out different source modules together into one place to be able to work with them - without Conan - which seems odd.

@mschmid Thanks for the explanation! I'm still not fully grokking it, but that's okay.

I'm very interested to know how the Conan team's design in this area is shaping up. It's a complicated beast to support many different workflows.

My project has strict requirements that every commit on the master branch be reproducible. You can see more info in #447. I proposed one solution in that thread using a dual recipe technique. However, I realize now that its fatal flaw is that it breaks the full picture of transitive dependencies.

I suspect the only way to have exactly reproducible builds is to have unique recipes on the conan server for every commit. Or at least every commit that is a dependency for another package. This seems like recipe-overload, but I haven't discovered a way around it.

Once again, would love to hear what the Conan team is thinking for this "development flow" feature.

@memsharded, @lasote Do you have any estimation for when is this going to be released? This is really the only thing holding my company from using conan.
Thanks!

Not yet. We are reviewing all the local commands and local flows. Personally, I think that the conan package_files command works well. Have you tried it?
I would like to hear your feedback; I've been outlining a blog post to capture feedback, but it's not ready yet. So please, all, feel free to comment here on the weak points or pains of the following flow:

  1. Call conan install to install the package dependencies (if there are) and capture the settings. You can use profiles to manage different configurations.
  2. Develop your package locally, even building with your IDE.
  3. Call conan package_files indicating your local build_folder. It will call the package() method of the recipe and will store the artifacts in the local cache. (You can call it with a profile too, but we will improve it to capture the settings from the generated conaninfo.txt and conanbuildinfo.txt files)

So you don't need to change anything in the packages that depend on that package, because everything is read from the local cache.

You could do the same for all the libraries you are editing.

WDYT? Feedback please! @yosizelensky @mschmid

@lasote

If I understand correctly, this method will overwrite the package binaries in the local cache with the output of a local build. In the local cache, the build directory corresponding to a particular profile/configuration is modified, but there is no change to the source directory nor the recipe itself. Is that correct?

This sounds like you're corrupting your local cache. After running package_files, the binary in your package does not agree with the recipe or source in the local cache. This is especially problematic if you have multiple projects depending on the same package.

IMO, this speaks to a critical issue that should be addressed: the overloaded responsibilities of the local cache. Currently, the local cache is 1) a cache for stuff you pulled from a remote which optimizes fetch times and enables working offline and 2) a place where you stage packages you are developing before you upload them to a remote. I'm going to argue that the responsibility of creating and modifying packages (2) is at odds with caching (1) and creation/modification of packages should be in a project-local sandbox.

Gruesome details:
Using the local cache for creation/staging (2) is fairly safe if it is a new, unpublished package. However, if this is a published package you fetched from a remote or if you are using the package for more than one local project dependency, it is bad news to make modifications to it. In other words, most of the time, I should expect the local cache to be as immutable as my remotes. Therefore, I would suggest that the act of creation/staging (2) should be removed from the local cache, reducing the responsibility of the local cache to .... caching packages.

Here is how it would work and how it addresses this "conan project feature":
If you have used yarn or npm, you will be familiar with a folder called node_modules, where project dependencies are fetched into a folder within the project. This insulates projects from each other because dependencies are sandboxed within the project itself, not globally.

Of course, a global cache is still useful, especially when we're working with binary compilation, so I'd propose the local cache still exist and we employ some copy-on-write behavior. If you want to modify a package, you execute a conan command to bring a copy of the recipe to a project-local directory (similar to node_modules) for you to modify. When the package is ready for consumption, you update the local cache and remotes. The key being, updates to the local cache are as deliberate as updates to remotes.

Note, there have been many discussions discouraging in-place modifications of recipes at all. See #101 for discussions on immutable recipes. If you are in that camp, this idea of having a project-local copy of a package allows you to rev the package and update the cache and remotes with a new revision, rather than modifying an existing package, which is central to the idea of immutable packages, package revisioning, and workflows that require you to go back in time and have reproducible builds.

Hi @kenfred, thanks for your feedback.
I think that using the local cache both for caching retrieved packages and for development was not critical, because:

  • The local cache (by default) is in the user's home; in many cases, the user won't care about overriding local cache packages with development versions, because:

    • Probably the version or channel of the project being developed is different from the one required by the other projects.

    • If it's the same channel and version, I think the user will be glad to depend on the package being developed.

  • You can have multiple local caches: adjust your terminal with the environment variable CONAN_USER_HOME and then you will have a different cache, like a node_modules for you.
  • The package_files command requires the recipe to be exported; of course, you could ensure your recipe is the latest by doing:

    • Call conan install to install the package dependencies (if there are) and capture the settings. You can use profiles to manage different configurations.
    • Develop your package locally, even building with your IDE.
    • Call conan export and conan package_files indicating your local build_folder. It will call the package() method of the recipe and will store the artifacts in the local cache. (You can call it with a profile too, but we will improve it to capture the settings from the generated conaninfo.txt and conanbuildinfo.txt files)
  • Finally, you typically won't push the development package to a conan repository; you will push the package to the CI, and the CI will generate the stable packages if needed and, of course, will test the final projects against the generated dependencies.

  • Your local projects can now depend on the CI-generated package (typically the stable channel), or keep using the development version.

Please! @yosizelensky @mschmid let's discuss all these ideas. WDYT?

Hi @lasote. Thanks for your response.

Here is a fictional use case that I think is under-served by the workflow you're proposing: Consider a company that hosts a private conan repository. The company has already released Project A and are working on Projects B and C. All three of them depend on Lib X, a package containing secret sauce that is continuously developed and improved.

The local cache (by default) is in the user's home; in many cases, the user won't care about overriding local cache packages with development versions, because:

  • Probably the version or channel of the project being developed is different from the one required by the other projects.

Project A, which is already released, probably depends on a particular Lib X version (let's say v1.0), which has its own recipe. So Project A is not disturbed by ongoing development of Lib X.

Projects B and C, which are being developed along with Lib X, both depend on the evolving Lib X v1.1 - same version and channel. The scenario of multiple consumers depending on some common package that is also in development is common.

  • If it's the same channel and version, I think the user will be glad to depend on the package being developed.

If a developer is working on Projects B and C, certainly they would like to get the latest updates to Lib X v1.1. However, those updates should be the completed commits that have gone through CI, not ones currently on the operating table for the other project. It is critical that Projects B and C have independent working copies of Lib X v1.1 if changes are being made.

You can have multiple local caches: adjust your terminal with the environment variable CONAN_USER_HOME and then you will have a different cache, like a node_modules for you.

If you are in agreement that independent working copies for Projects B and C are important, the next thing to discuss is the workflow and user experience. IMO, manually setting up local caches per project and making sure I'm staying within the shell that has sourced the proper environment is fraught with opportunities for user error. It also feels like conan is considering this operation to be exceptional or infrequent, where I'd say the co-development use case is prevalent.

Maintaining multiple local caches also defeats many of the benefits of the local cache. If I must create a unique cache, I need Project B to have copies of all packages it depends on, whether or not they are co-developed. So that long Boost or Poco build I did for Project C can't be used for Project B. This is especially frustrating if I'm offline and I know Project C has what I need in its cache, but I didn't have the foresight to fetch it into the Project B cache. The only way around that would be to manually copy the files from one cache to another. Yuck. I'm not looking for another cache; I'm looking for a per-project sandbox for my package in development, with the ability to use the common cache for other packages.

I believe conan is right to default to a single local cache with infrequent ability to make a new cache. That is how I'd expect a package cache to work. The rub comes in when you attempt to take that cache design and make it play double-duty as a working copy for an individual project: trying to serve both responsibilities weakens its original purpose. Instead, I would recommend introducing the project-local dependency folder that behaves as I described in my last post.

The package_files command requires the recipe to be exported; of course, you could ensure your recipe is the latest by doing:

  • Call conan install to install the package dependencies (if there are) and capture the settings. You can use profiles to manage different configurations.
  • Develop your package locally, even building with your IDE.
  • Call conan export and conan package_files indicating your local build_folder. It will call the package() method of the recipe and will store the artifacts in the local cache. (You can call it with a profile too, but we will improve it to capture the settings from the generated conaninfo.txt and conanbuildinfo.txt files)

If I run conan install on both Projects B and C and then start to modify Lib X v1.1 in the context of Project B, it will be easy to forget and become confused why Project C isn't working. If I switch between packages, I have to remember to do package_files so I can be confident the cache has the correct binaries. Once again, I think this is too error-prone and too manual. I believe it is critical that Projects B and C do not share working copies of Lib X.

Finally, you typically won't push the development package to a conan repository; you will push the package to the CI, and the CI will generate the stable packages if needed and, of course, will test the final projects against the generated dependencies.

Your local projects can now depend on the CI-generated package (typically the stable channel), or keep using the development version.

Verifying the package with CI sounds like good practice. To me, this further highlights that what is in your local cache should reflect the verified packages, not the working packages. I must have a way for the verified version to exist in my local cache without having to overwrite it with my dev version. New packages should be sent directly to the CI server and not staged/exported into the local cache as an intermediate step to sending it to the CI server. Once they are verified and accepted, then I can fetch the verified package to my local cache, clean the project-local dev version of the package, and submit the dependent project to the CI server.

Thanks,
Ken

To expand on @kenfred's post, which I fully agree with, have a look at what Yarn has as a solution to this same problem (Yarn is similar to NPM, but newer): https://yarnpkg.com/en/docs/workspaces
Note that where they say 'workspace-a' etc what they mean is 'working directory for package-a'.
To add context from my perspective, a typical project is made up of around 14 internal libraries, of which at least 2-5 (or sometimes all of them) will require development changes to implement a feature request, so concurrent development of packages is absolutely the norm. Further to that, I have around 20 such projects on my dev machine at present, most sharing at least two 'common' packages, which I will work on independently (i.e. I will work on different changes to the 'same' package within the context of each project, in parallel streams), so having that concurrent development _scoped_ to a given project's workspace is also critical.

I've opened another issue which describes workflow problems and potential solutions to this workflow. It is a high-level description of common day-to-day development.
https://github.com/conan-io/conan/issues/2491

I'd like to add a few changes to the issue:
I mention pointing to local sources in conanfile.txt. As @memsharded points out, this is error-prone: it may get committed. A more elegant solution would be in the build folder, in conaninfo.txt. I would like to emphasize having a way to drive this workflow with a text file as well as a command, not just a command. The number of libraries my colleagues and I have to deal with locally is quite large.

I describe a hypothetical option that would resemble:

[requires]
Bacon/1.0.0@me/stable
[source]
Bacon:source_url=~/Documents/code/Bacon

As discussed here, you fall into the issue of which commit/tag to checkout. I believe support for branches would fix this. It is also how you would develop a local library change in most cases. So the option would change slightly to:

[requires]
Bacon/My_Branch@me/stable
[source]
Bacon:source_url=~/Documents/code/Bacon

or

[requires]
Bacon/1.0.0@me/stable
[source]
Bacon:source_url=~/Documents/code/Bacon
Bacon:source_branch=My_Branch

# This translates to: use recipe for v1.0.0 but pull source from local dev directory ~/Documents/code/Bacon and checkout branch My_Branch @ HEAD.

Something along those lines. Checking out HEAD of a given branch. Obviously I am discussing at a high-level without regards to the implementation details, which would likely be massive.

I haven't tried package_files, but I will. From what I read, it seems to take the inverse of the approach I would expect. I'd rather have an existing recipe grab sources from a local folder than build a local copy and copy executables into the cache.

Let's say I have software1 .. software5, each with a different branch in my local Bacon clone. Every time I update a branch I would have to run package_files. Would these packages conflict in my local cache? Instead, if each software1 .. software5 project points to a different local branch and "pulls" HEAD when I build them, the issue seems to disappear?

I have to think about this a little more, my 2 cents for now :)
Good day

I haven't tried package_files but I will.

package_files was deprecated before 1.0. It has been replaced by the export-pkg command, which does much the same with many improvements (and a better name)

Lets say I have software1 .. software5, each have a different branch in my local Bacon clone. Every time I update a branch I would have to run package_files. Would these packages conflict in my local cache?

No, if they are different packages, they will not conflict. You can also have different copies or branches in different places, and as long as you export-pkg them to different channels (for example, one per branch) or as different binary configurations (for example, two different folders for debug and release, each export-pkg'd with different settings), they will be stored as two different binaries in the local cache for the same package.
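As a concrete illustration (the reference, channels, and folder names are made up; the flags follow the conan 1.x export-pkg syntax used elsewhere in this thread):

```shell
# Two binary configurations of the same local build, exported separately:
# each becomes a distinct binary in the local cache under the same recipe.
conan export-pkg . user/testing --build-folder=build_debug -s build_type=Debug
conan export-pkg . user/testing --build-folder=build_release -s build_type=Release

# A working copy from a different branch can go to its own channel instead:
conan export-pkg . user/feature_x --build-folder=build
```

Because the settings (or the channel) differ, none of these overwrite each other in the cache.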

(This is transferred from a comment on #2491)

My use case is similar to @p-groarke's, but rather than referencing source urls, I would like to reference targets in the same CMake build configuration.

For example, I have a root CMakeLists.txt that uses add_subdirectory to expand out into a source tree several layers deep. Some nodes of the tree add library targets with namespaced aliases, for example, a library would have:

add_library(database-common ${SOURCE_FILES})
add_library(MyProj::DatabaseCommon ALIAS database-common)

In my use case I am editing my application and MyProj::DatabaseCommon simultaneously and they are in the same source tree. Instead of using CONAN_PKG::DatabaseCommon, I need to use MyProj::DatabaseCommon.

Note: I will not be building packages for DatabaseCommon while I am working on my application. That will happen in CI after I commit my changes.

My current workaround is to edit the application's CMakeLists.txt and point to MyProj::DatabaseCommon instead of CONAN_PKG::DatabaseCommon.

For example:

    set(MYPROJ_DATABASE_COMMON_LIB "MyProj::DatabaseCommon")
#   set(MYPROJ_DATABASE_COMMON_LIB "CONAN_PKG::database-common")
#   set(MYPROJ_SERVICE_CLIENT_LIB "MyProj::ServiceClient")
    set(MYPROJ_SERVICE_CLIENT_LIB "CONAN_PKG::service-client")

    target_link_libraries(service-test
            ${MYPROJ_DATABASE_COMMON_LIB}
            ${MYPROJ_SERVICE_CLIENT_LIB}
            CONAN_PKG::catch2 CONAN_PKG::spdlog)

This gets more complicated when the library that you are working on is used in multiple places. For example, if I am working on CoreUtil and that library is used by both DatabaseCommon and ServiceClient, I have to change both DatabaseCommon/CMakeLists.txt and ServiceClient/CMakeLists.txt to use MyProj::CoreUtil instead of CONAN_PKG::CoreUtil. This is error-prone, to say the least, and is why something like @p-groarke's conanfile.txt approach is better: it is done in one place and it automatically translates correctly in conanbuildinfo.cmake.

Another thing I'm concerned about is developers accidentally checking in a modified conanfile.txt or CMakeLists.txt. To prevent this, the local mapping would be done in a non-version-controlled file, something like conanfile-override.txt (listed in .gitignore), containing the overrides.

I'm looking into using a customized CMakeGenerator for this. It can check the local mapping file and translate package names to CMake targets.
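A sketch of that translation step (the conanfile-override.txt name and its key=value format are this comment's proposal, not an existing Conan mechanism; the helper names here are hypothetical):

```python
def load_overrides(path):
    """Read package -> CMake target overrides from a simple key=value file,
    e.g.  database-common=MyProj::DatabaseCommon
    Lines starting with '#' are comments; a missing file means no overrides."""
    overrides = {}
    try:
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                pkg, target = line.split("=", 1)
                overrides[pkg.strip()] = target.strip()
    except FileNotFoundError:
        pass  # no override file: fall back to CONAN_PKG targets everywhere
    return overrides


def resolve_target(pkg, overrides):
    """Map a Conan package name to the CMake target to link against."""
    return overrides.get(pkg, "CONAN_PKG::{}".format(pkg))
```

A customized generator could run this mapping while emitting conanbuildinfo.cmake, so the version-controlled build files never change.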

@memsharded So I tested export-pkg and the local-flow described in the docs. It does cover my use case quite well for packages with a conanfile.py in them. I'll test with external recipes later.

From scratch and reading the doc only; I was able to clone the master repo, fix a bug, create a local testing package, consume it in a second local project (using package/master for version in my conanfile.txt) and test in under 5 minutes :)

I have some nitpicks, because it is a bit tedious, but it works and I'm very impressed by how smooth it was! These features may already exist; if they do, I'd just recommend mentioning them in the docs.

Nitpick 1 : The folder parameters are ungodly.
Potential solution : Provide 1 argument to set the root temp folder, use step names for sub folders.
So the following

conan source . --source-folder=tmp/source
conan install . --install-folder=tmp/install
conan build . --source-folder=tmp/source --install-folder=tmp/install --build-folder=tmp/build
conan package . --source-folder=tmp/source --install-folder=tmp/install --build-folder=tmp/build --package-folder=tmp/package
conan export-pkg . user/testing --source-folder=tmp/source --install-folder=tmp/install --build-folder=tmp/build

becomes

conan source . --local-tmp=tmp/
conan install . --local-tmp=tmp/
conan build . --local-tmp=tmp/
conan package . --local-tmp=tmp/
conan export-pkg . user/testing --local-tmp=tmp/

Why : conan is already very complex, and users already point out its complexity as a reason not to use it. I personally find it OK, but offering simpler "basic" commands is always welcome.

Nitpick 2 : A setting to force a dependency rebuild. Maybe in your conanbuildinfo.txt, or a command-line option, or both. I doubt this is realistic or possible, but it's worth mentioning in case it is.

As it stands, you modify your local package, conan create, switch to the consumer project, conan install, build, test.
I'm suggesting that in the consumer project it becomes: conan install (forcing a rebuild of the local dependency), build, test. A much faster iteration loop.

Good day, I'll pop by once I test with external recipes for some more feedback.

@p-groarke

Nitpick 1 doesn't take into account that some of the commands need to reference multiple folders. For example, conan build needs to know source, build, and package folders. So conan has consistent naming for these folders.

One addition that could be explored is a prefix, where you give it a base folder and the other folders are assumed relative to the base. The defaults are already in place, because the locations are assumed to be within the local cache.

Nitpick 2 is covered. You can set the build policy from the command line (consumer) or from within a package recipe (provider). What I have not seen is an example of a consumer setting the build policy of a dependency from within its conanfile.txt, but I assume it's similar to how you set options on dependencies from within conanfile.txt.
http://docs.conan.io/en/latest/mastering/policies.html
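For reference, the command-line form of that build policy looks like this (Bacon is the illustrative package name from earlier in the thread):

```shell
# Force the named dependency to be rebuilt from source during install
conan install . --build=Bacon

# Rebuild everything, or only packages with no prebuilt binary available
conan install . --build
conan install . --build=missing
```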

@kenfred Thank you for the clarification. I was thinking about a prefix-like base folder as well. For example, conan build needs the source, build and package folders. You provide it with a root folder (in the example it was tmp/), and it searches for default folder names: tmp/source, tmp/build and tmp/package. It is basically a prefix :)

I will look into policies. Good day.

Thanks @kenfred and @p-groarke for the feedback.

The current work I am doing on conan-project actually allows setting the root folder for the dependency, then the rest of folders (build, package) are local to that root folder. However they are defined in file, not in command line, so not exactly what you are requesting here.

Quick question @p-groarke: for the above flow based on --local-tmp, why not just cd tmp and then issue all the commands, like conan install ..? Also, in most cases the --install-folder is not necessary, and the default (the build folder) works very well. Same for the default source folder. Why not use the defaults?

@memsharded Well... because I didn't think of that XD I just followed the local-flow documentation to the letter ^_^ I'm still learning how the tool works and the package idioms used aren't natural to me (I'm not devops). I'll definitely do that in the future.

Maybe the example in the documentation should use a subfolder and mention the folder arguments? I'd be happy to update the documentation if it is open-sourced. Anyway, thanks for the tip.

@memsharded I'm interested in the feature you're implementing. I assume this is the plan to allow simultaneous development of multiple packages. How do you intend to make the transition from the dependency being in a local folder to it being published and in the cache? Can you give a rundown of the full workflow?

@p-groarke no problem! XD

Sure, we really welcome contributions and improvements in the docs, we know they are always far from perfect. Yes they are open source too, written in restructuredText: https://github.com/conan-io/docs

@kenfred This is the branch: https://github.com/memsharded/conan/tree/feature/conan_project
These are some tests illustrating some parts of it: https://github.com/memsharded/conan/blob/feature/conan_project/conans/test/integration/conan_project_test.py

Basically, a file that will map dependencies to local folders. In the tests, only the root folder is being mapped, but the source, build and package folders will be possible too, and also include-dirs and lib-dirs, in case you need to re-map and consume directly from the build folder. Also parameterizing build folder names, to be able to simultaneously manage debug/release and x64/x86.

Mostly WIP: investigating what is possible and what is not, and learning about the problem before making a first formal proposal.
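To make the discussion concrete, a hypothetical sketch of such a mapping file (the schema was not settled at this point; the key names and layout are illustrative only):

```yaml
# conan-project.yml (hypothetical draft schema)
HelloB/0.1@lasote/stable:
    folder: ../B        # consume B from this local checkout, not the cache
HelloC/0.1@lasote/stable:
    folder: ../C
```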

Hi @memsharded and @lasote
Why did you move this feature to the next milestone, 1.3? :-(

Our team is waiting for this feature and it would be very nice to get it as soon as possible.
Probably also as a first version/beta/incubation/experimental feature which will give you the chance to get more feedback and input from the end users.

Appreciating to get feedback.
Best Aalmann

Hi @Aalmann

There are ongoing efforts on this feature, and we have made some advances. We might try to share something sooner, but the truth is that this is very far from ready to be released. I even think it won't be ready for production in 1.3, and it will require several iterations; but yes, I agree that it would be great to have early feedback, so maybe we will try to have a preview in a branch or something.

I am sorry to have to delay this feature, but conan keeps growing quickly, and there are also many other very important things on the roadmap, so it is impossible to deliver all of them as fast as we would like.

Please stay tuned for a possible preview and discussion of the feature. Thanks!

Here goes the first preview of this feature: https://github.com/memsharded/conan-project-example

It runs from my branch, from sources. It is still preliminary, tested for that example with cmake and VS (tried too with CLion and the VS toolchain and it worked too).

I will continue working on this but feedback very welcome!

@memsharded Thanks for working on this! I've looked over your example project. I have some feedback, but first I need some clarification on your goals for this feature. Can you describe the specific use cases you're trying to address with this feature and the intended workflow?

If I understand the usage, here is how I can envision using it:

  • I'm working on A, which depends on B
  • If I don't need to modify B, I should be able to clone A and a simple conan install will get B from the local cache.
  • If I decide I need to work on B in parallel with A, I can add this conan-project.yml, which will override what's requested in A's recipe with a local folder for the B dependency. This mechanism expects the local folder for B to have a conan recipe. However, it ignores whatever version is in the recipe, because it is being changed in parallel.
  • I finish my work on A and B and want to push my changes. However, I don't want to check-in conan-project.yml because most people working on A just want B to be fetched from a conan package repository. So I have to be very careful about the order in which I push these things:

    • First, I rev the recipe for B, check it in, and push it. This rev needs to increment some recipe number and somehow reference the new git change of B. I also export the new recipe for B to my local cache and upload to a remote.

    • Second, I delete or disable my conan-project.yml so I can verify A will use the new B package. I update A's recipe with B's new package revision, and redo conan install and build and test.

    • Third, things look good for A using the package for B, so I can finally push A.
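The ordering described above might look like this on the command line (a sketch; the reference names, the 0.2 version bump, and the remote name are hypothetical):

```bash
# 1) Rev and publish B first
cd B
conan export . HelloB/0.2@user/stable
conan upload HelloB/0.2@user/stable -r my-remote

# 2) Disable conan-project.yml, point A's recipe at HelloB/0.2, re-verify
cd ../A
conan install . --build=missing
conan build .

# 3) Only when A works against the published B, push A's sources
```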

It is this commit/push workflow that I think is especially error-prone and difficult to manage with this conan-project feature. Here are some of the things that can and will go wrong:

  • I exported B to my local cache and a remote, but failed to rev the recipe, thus breaking the previous B package everyone was using. (I wrote over what should have been an immutable package).
  • I remembered to rev my recipe for B and exported to my local cache, but forgot to upload to a remote. Anyone else using A will be broken.
  • I remembered to rev my recipe, export to my local cache, and upload to a remote, but I failed to update the reference within B's recipe to point to the new git change for B. Once again, A will have the wrong B and it will be broken.
  • I did everything correctly in picking a new rev for B, updating the reference in the recipe, exporting to the local cache, and uploading to the remote. However, my co-worker did this at the same time as me, and there is no management for merge collisions. The person who pushes last wins and the others get their recipes clobbered.
  • I pushed A but forgot to update its B dependency on the new revision.
  • I pushed A before I did anything for B.

Any ideas on how to make this less error-prone? Of course, some of these potential errors could be avoided with more robust CI practices, where changes to recipes are serialized through your CI pipeline and breakage of A is flagged immediately.

Some ways conan could mitigate these:

  • Packages are immutable once published. (This has been discussed in other issues).
  • Automatic, server-side recipe revisioning. With rapid co-development on A and B across multiple people, a manual convention for updating B's revision will be very tedious and have frequent collisions.
  • Some handling for revision collisions. At least uploads should be rejected if overwriting an existing recipe. Alternatively, the previous suggestion of handling revisioning may be all the collision avoidance you need.

Some other questions: Is it practical to have a new package for every B change in git? How can this be done without so much manual overhead in maintaining the package? What is the workflow if you're doing a long-running feature branch on A and B? Do you just have tons and tons of packages for B on the repo?

Thanks,
Ken

Hi @kenfred

Yes, you got the conan-project idea and current proposal perfectly. It is intended for simultaneously modifying more than one package without needing to use the conan local cache at all. It is entirely possible to have different conan packages edited at the same time in a single Visual Studio solution, for example, and changes to the source code of B will trigger a re-build of B when you launch the A executable from the IDE.

I agree there are some things that have to be carefully managed when working simultaneously on different packages, but as I see it they are highly related to CI, not to this conan-project feature, because the issues you describe can already happen if you are making changes to packages A and B in separate projects.

Most of the suggestions you describe are either already possible or ongoing work:

  • Artifactory allows setting different permissions; removing "delete" permissions on packages or repos can achieve package immutability on the server
  • Revisions are ongoing work; there is already a proof of concept here: https://github.com/conan-io/conan/pull/2720. It is very preliminary and very high risk, so it will take time to mature and will likely evolve a lot. But it is the subject of ongoing efforts.

Please try to follow conversations in the respective issues, or open dedicated issues for different questions, as it is very difficult otherwise to properly respond.

Is it practical to have a new package for every B change in git? How can this be done without so much manual overhead in maintaining the package?

It depends on your level of reproducibility, traceability, etc. You might be fine with a "develop" channel for the next version that is continuously overwritten; it is not necessary to keep a versioned package for each git commit.

Regarding your feedback, I think there might be a related use case for conan-project: the conan create one. Maybe we want the conan create command to behave accordingly when there is a conan-project involved: in the same way the project affects conan install, it could fire the conan create of the different local packages in the right sequence, to make sure everything works as expected in the local cache.
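The "right sequence" mentioned above is a topological order of the dependency graph. A minimal sketch of how a workspace tool might compute the order of `conan create` calls so each package is created after all of its requirements (the function name is hypothetical; the graph mirrors the A requires B requires C example from this thread):

```python
def create_order(requires):
    """Depth-first topological sort: every package appears after
    all of its requirements."""
    order, seen = [], set()

    def visit(pkg):
        if pkg in seen:
            return
        seen.add(pkg)
        for dep in requires.get(pkg, []):
            visit(dep)
        order.append(pkg)

    for pkg in requires:
        visit(pkg)
    return order

graph = {"A": ["B"], "B": ["C"], "C": []}
print(create_order(graph))  # C is created first, then B, then A
```

A real implementation would also need to detect cycles and honor version overrides, which this sketch ignores.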

Hi @memsharded ,

I tried to test your example, but sadly it doesn't work, due to the requirement "HelloB" being missing / not found.
The output of D:\Work\_conan\_test\conan-project-example>conan install A --build -if=build is

Using conan dev-version checked out at D:\Work\_conan\_test\conan\ with git params:
On branch feature/conan_project
Your branch is up-to-date with 'origin/feature/conan_project'.
nothing to commit, working tree clean
origin  https://github.com/memsharded/conan.git (fetch)
origin  https://github.com/memsharded/conan.git (push)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
==========================================================
Testing conan project at D:\Work\_conan\_test\conan-project-example with git params:
On branch master
Your branch is up-to-date with 'origin/master'.
nothing to commit, working tree clean
origin  https://github.com/memsharded/conan-project-example.git (fetch)
origin  https://github.com/memsharded/conan-project-example.git (push)
==========================================================
HelloB/0.1@lasote/stable: Not found in local cache, looking in remotes...
HelloB/0.1@lasote/stable: Trying with 'conan-center'...
ERROR: Unable to find 'HelloB/0.1@lasote/stable' in remotes

The used batch file for conan calls is (directly within the cloned conan dir):

@echo off
set WD=%CD%
echo ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
echo Using conan dev-version checked out at %~dp0 with git params:
cd %~dp0 && git status -uno && git remote -v
echo ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
cd %WD%
echo ==========================================================
echo Testing conan project at %CD% with git params:
git status -uno && git remote -v
echo ==========================================================
python %~dp0\conans\conan.py %*
@echo on

Any idea? My fault or something missing within conan?

Thanks.
Aalmann

Edit: I used a fresh and clean cache and did not build B and C first.

Yes, this is very weird. The first lines that you should see should be something like:

PROJECT: Installing C:\Users\memsharded\conan-project-example\A\conanfile.py
Requirements
    HelloB/0.1@lasote/stable from local cache
    HelloC/0.1@lasote/stable from local cache
Packages
    HelloB/0.1@lasote/stable:19056882ab002381c35270bdb67b66b12d5b6ea6
    HelloC/0.1@lasote/stable:6cc50b139b9c3d27b3e9042d5f5372d327b3a9f7

HelloC/0.1@lasote/stable: Calling build()

Instead of that, the first line is trying to retrieve HelloB. I would say it is something in your launch script, but I couldn't identify it so far. My guess is that it is not finding and loading conan-project.yml at all. You could check that by introducing garbage into that file and trying again. I have also added a message to the output, reporting the found and used conan-project; if there is no message, then there is an error. If you want to debug why the conan-project is not found, maybe try adding a few prints in the get_conan_project() method.

Many thanks for reporting this.

It was my fault. :-(
I forgot to set python path correctly and so the installed "1.2.3" version was used.

Sorry for that. Now your example works and I'm able to test it and give you a better feedback.

Did some first tests.
First of all, from a technical point of view: great job. :+1:
I will test more as soon as possible, but I want to give you a first feedback of some issues.

Issue 1:
Doing it exactly as described in your example works fine. All generated CMake and Visual Studio files are created beside the conan-project.yml.
To have a clean source tree I used a "meta" directory beside conan-project.yml. My script looks like this:

@echo off
set CMAKE_GENERATOR="Visual Studio 14 Win64"
echo clean up dirs
echo ----------------------------------------
rd /S /Q A\build || echo:
rd /S /Q B\build || echo: 
rd /S /Q C\build || echo:
rd /S /Q meta || echo:
echo creating A and all requirements for Release
echo ----------------------------------------
cd A
call conan install . --build -if=build
call conan build . -bf=build
cd ..

echo creating A and all requirements for Debug
echo ----------------------------------------
cd A
call conan install . --build -if=build  -s build_type=Debug
call conan build . -bf=build
cd ..

echo Meta project
echo ----------------------------------------
mkdir meta
cd meta
call cmake .. -G %CMAKE_GENERATOR%
call cmake --build . --config Release
cd ..

echo finished
echo ----------------------------------------
@echo on

The only difference is the meta dir. But it fails at CMake generate with:

CMake Error at A/src/CMakeLists.txt:4 (include):
  include could not find load file:

    D:/Work/_conan/_test/conan-project-example/meta/A/build/conanbuildinfo.cmake


CMake Error at A/src/CMakeLists.txt:5 (conan_basic_setup):
  Unknown CMake command "conan_basic_setup".

This is because the "original" build dir (\
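For context, the failing lines in A/src/CMakeLists.txt follow the usual consumer pattern from the Conan docs. A hedged sketch of why it breaks under the meta project, and a possible workaround (the fallback path is an assumption based on the `-if=build` layout used in the script above):

```cmake
# The usual pattern expects "conan install" to have run in this binary dir:
#   include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
# Under the meta project the binary dir becomes meta/A/build, where conan
# never generated the file. A more forgiving sketch:
if(EXISTS ${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
    include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)
else()
    # fall back to the folder used by "conan install . -if=build"
    include(${CMAKE_CURRENT_SOURCE_DIR}/../build/conanbuildinfo.cmake)
endif()
conan_basic_setup()
```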

Issue 2:
The workflow itself is a bit complicated (for some users).
Just for clarification: first, conan install with the --build param is called on A, which only builds B and C (btw, is a build of B and C really required at this point?). I think the --build param has no effect (I can't see a build of A with or without --build, but always a build of B and C). After that, conan build is called for building A. Now everything is built. I am not sure the build of all modules is really required at this point; if not, it could save a lot of time.
But the main thing is that one has to know which is the "top module" of the dependency tree (in your example A), and this "top module" has to (transitively) require all other modules from conan-project.yml. So when I run the script on B instead of A, A is missing in the solution and the build fails, because A is also generated into the meta-CMakeLists.txt. And what if there is no "top module" (e.g. if A depends on C and B depends on C, but A not on B)?
Wouldn't it be easier (for all users and you) to treat the conan-project.yml as a "special recipe"? What I mean is, I would suggest making it possible to call conan install conan-project.yml -if=build, which (internally) creates a "project" recipe containing all the requirements listed in conan-project.yml and, depending on the given generator, a standard build method with cmake calls (and the new should_configure flags). The biggest advantage would be that the user doesn't have to use cmake calls on the command line anymore. Just conan build on the generated recipe and everything works out of the box.

Issue 3:
Please make the meta-CMakeLists.txt path configurable to be able to always have a clean source tree.

Best Aalmann

Thanks very much for testing and your feedback!

Regarding:

The biggest advantage would be that the user doesn't have to use cmake calls on the command line anymore. Just conan build on the generated recipe and everything works out of the box.

The truth is that the biggest use case of the conan-project is not having to call conan at all, not even conan build. Everything can work from the IDE or from the build system directly.

Regarding defining the root of the graph, I think it would be cleaner to just define the root (or multiple root packages) in the conan-project.yml file. In that case it is not necessary to create an extra "project" recipe. I will consider this with more time; the idea sounds interesting, but I am not sure how it would scale to other build systems.

For me the biggest priority is to fix the meta folder for keeping a clean tree, I will focus on that for next iteration of conan-project.

Cheers-

The truth is that the biggest use case of the conan-project is not having to call conan at all, not even conan build. Everything can work from the IDE or from the build system directly.

If this is really what you think, you should reduce the conan commands to just conan install and conan upload, and the rest would have to be removed and done with the native build/scm system. But to be honest, that wouldn't help at all and would not be my approach.
You have to differentiate between the "common developer/user" and the "pro developer", and ask yourself what the real use case of conan-project is.
IMHO the use cases/requirements are:

  • "As a developer of a bunch of software modules I want to be able to generate a complete software project (e.g. Visual Studio solution) which contains all required and/or listed modules."

    • So far, this is done with your current approach.

  • "As a common developer of a bunch of software modules I want to be able to create the project with just a few (or one) simple command and the 'generating system' has to take care about everything."

    • The so called "one-click-solution".

    • Not really done. Currently there are too many different commands and context switches (conan install, conan build, cmake ... for meta project creation, and the IDE itself). This is hard to automate AND to document for different projects. And calling CMake directly (especially the generate step) is very error-prone, because the user has to know all the relevant CMake definitions, which are normally set in the recipe. This is not manageable for 5+ modules.

    • The better approach would be to install and generate with conan install conan-project.yml ... and create a solution with conan build --configure conan-project.yml. That would also fit the pro user, who can change generated files between conan install and conan build. See below for additional ideas concerning the "project recipe".

  • "As a pro developer of a bunch of software modules, I want to be able to locally change and adjust the project to my needs."

    • See the point above. Assuming that the generated files can be changed all the time and will be overridden after new install command.

  • "As a developer of a bunch of software modules I want to be able to generate the project which contains modules using different build systems underneath."

    • Something for a later iteration. Could be done by having a "major" build system (like CMake) which triggers/calls other (wrapped) build systems (Autotools, Scons, etc.).

  • "As a developer of a bunch of software modules I don't want to call conan commands when developing within IDE."

    • This is not contrary to the conan install and conan build -c ... approach. If everything required is already present AND configured, there is no need to call conan and one can work within the IDE.

  • "As a developer of a bunch of software modules I want to be able to select or work on just a subset of modules of a complete dependency graph."

    • Not sure if this is already part of this feature, but just to make sure it is taken into account (e.g. the dependency tree size is 10 and the project size is 5; the other 5 have to come from the local cache)

Regarding defining the root of the graph, I think it would be cleaner to just define the root (or multiple root) packages in the conan-project.yml file. In this case, it is not necessary to create an extra "project" recipe. I will consider this with more time, the idea sounds interesting, but not sure how it will scale to other build systems too.

The project itself has to be addressable. This would be the most obvious to the user. Let's take your example: A requires B requires C. A very simple dependency tree, but it leads to problems when calling the install on B instead of A while A is part of the project (just take my or your example calls and cd into B instead of A to see what I mean). Let's extend this dependency tree with Z (or A--) which also depends on B and is part of the project (A reqs B reqs C; Z reqs B reqs C). Currently that would or could end in an error when calling conan install on A and not on Z, and the user is confused about how to solve this problem.

The extra project recipe, either generated from conan-project.yml or hand-written, could derive from a new class ConanProject, which is able to call/loop over source(), build(), package_info() and all the other recipe methods to get all the required information and actions. When calling build() of a "normal" recipe, the CMake helper could be enhanced to just collect the CMake defs but skip the build itself. The same for other build helpers. Everything could then be merged together in the conan project class for generating the solution. And the pro developer could add code or reimplement conan project methods. And probably the conan-project.yml would not be required anymore.

For me the biggest priority is to fix the meta folder for keeping a clean tree, I will focus on that for next iteration of conan-project.

Thanks.

Best Aalmann

Any reason why this was removed from 1.3 w/o any target milestone?

We're eagerly awaiting this feature and @memsharded's preview branch looked very promising to us. Maybe including a first version into a release would help more users get involved and supply feedback with regard to this...

First version available in 1.4.4
Thanks @memsharded and @lasote for providing this as an official beta feature.
I will test it as soon as possible and give you feedback.

@memsharded would you like new issue(s) linking to this one, or should I give feedback here?

Could we please get an option in conanws.yml to specify the name of the generated cmake file?
I'd like to include the generated cmake file from my own CMakeLists.txt.

@memsharded
I've been using the workspace feature and have some feedback:

  • The workspace file expects that you are co-developing conan packages: you place them each somewhere on disk and create a super project pointing to them, including declaring a root package. However, we want to use the workspace file within a monorepo or git-submodules situation, where multiple modules in the repo have their own conanfiles and depend on other modules within the monorepo. In that case, the workspace file needs to be a peer of the root package's conanfile and checked in that way. A few things come out of this:

    • It would be nice if "root" was optional. If the yml is next to a conanfile, it can be assumed to be the root.

    • We're using the workspace to resolve all of the dependencies of the modules, but we're using add_subdirectory to declare CMake targets. That means the targets that conan declares for us are redundant and possibly conflicting. It would be nice to have a key in the yml to tell conan not to make targets.

    • The generation of CMakeLists.txt in this case is bad because it will overwrite the one we have in that folder. So we just don't use the cmake generation. Nothing to change here, just giving detail.

    • Since the modules in the monorepo aren't packages, they don't necessarily have package names. However, the workspace feature forces me to give them package names because I need to follow the same requires pattern at the root. It should be sufficient to point to each of their conanfiles from the workspace file to do all of the dependency resolution without having to give them package names. This also forces me to have a root conanfile when the workspace file could be sufficient and act in its place.

  • When using workspaces, you must manually specify the install folder, or else conan puts the generated files next to the workspace file. In our case, that means it puts them in-source in the root package's directory. It would be more consistent if it worked normally: I can mkdir a folder for an out-of-source build, do conan install <path_to_conanwsyml>, and it assumes PWD is the install folder. Same desire for the build folder.
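For reference, a hedged sketch of the conanws.yml shape being discussed (the folder and reference names are made up, and the exact keys may differ from the released schema; "root" is mandatory at this point):

```yaml
libfoo/0.1@user/dev:
    folder: modules/libfoo
libbar/0.1@user/dev:
    folder: modules/libbar
app/0.1@user/dev:
    folder: app
root: app/0.1@user/dev
```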

Also, having multiple roots doesn't appear to work at all in 1.11.1, because the conan file loader always asserts there is only one reference in load_virtual when the scope_options argument is True, which is the default.

I'm also not quite sure I understand why the root key exists in conanws.yml. Conceptually, all I want is an easier way to collect the dependencies of multiple conan packages in subdirectories and put them in the same cmake project so my IDE works. Is there an easier way to do this?

@lawrencem99

The original and intended purpose of the workspace feature is to allow co-development of interdependent packages. So for this, the workspace file is meant to be an ephemeral way to have the root package look on disk for the other packages instead of in the local cache.

There has to be a root and there can only be one.

However, I think the workspace idea should accommodate the workflow where you need to consider multiple Conan files in a tree, where those Conan files are not necessarily package recipes. Also, in this case, the workspace file itself could designate the root and no root key would be necessary.


OK that makes a lot of sense, I particularly like the sound of the last part :)

Maybe the doc page on conan-project should have the part about multiple roots removed?

https://docs.conan.io/en/latest/developing_packages/workspaces.html (this one)

A lot of different use cases have to be considered.
First of all, on the conan layer:

  1. Multiple repos, currently not cloned/checked out, each repo containing its conanfile, plus a team-shared workspace file.
    There should be an scm entry in the workspace file to be able to get all the sources and conanfiles.

  2. Same as 1, but with a central recipe repo (also fits the monorepo use case).
    Useful for 3rd-party source repos where one cannot add a conanfile. In this case the workspace file should contain the scm info of the central conanfiles repo, clone it, and after that call the source() method of each referenced conanfile.

Advantage of 1 and 2: because you (can) share the workspace file, the relative file layout is always the same on different development machines.

After that, one can think about the CMake layer. CMAKE_INSTALL_PREFIX has to be different for each module but can only be set once, globally, at configure time. Another point: assume bigger modules with a lot of components (like Qt, OpenSceneGraph or others) which can only be used correctly via find_package and which have a working CMake config file (shipped with CMake or with the package itself). One will/can never "reimplement" all that information in conan's package_info().
So the CMake files of each module in the workspace have to take at least these points into account.

Probably a better approach could be to have a DSL as a meta-meta-buildsystem definition (I know it sounds crazy) which describes what to build and how. With that one could generate the CMake layer, and it could fit the various use cases better.
Btw, having this information one could also generate files for other build systems, like a complete gradle or bazel layer.

With all of that one could remove root or make it optional.

Best
Aalmann

@lawrencem99 I never noticed that bit about multiple roots. Looks like I'm mistaken.

What I was referring to is the concept that a dependency tree must be an acyclic graph that funnels into a single package endpoint. I mistakenly carried this idea forward to how the conan workspace should function. It does indeed appear that the workspace allows for multiple roots.

One legitimate reason you might do this is if you want to develop a library that is common to two roots and make sure that both roots build with the developing library (using a single make command from the generated super project to build everything when the lib source changes).

However, it comes with a big caveat: "all of them will share the same dependency graph". In my view, this makes the roots indirectly dependent on one another and this should be generally avoided. It's also an arbitrary limitation to force the roots to share the graph. I would prefer it if the conan workspace would do a conan install invocation per root and allow them to have independent graphs. The generated super project would still create a super makefile that ensures you can build all the roots at once in response to the changes to the common lib. @memsharded Would that be a way to alleviate the same dependency graph limitation?

By the way @kenfred, regarding the second point in your first post, about Conan's targets conflicting with those you already have: I seem to have avoided this problem because I use the cmake_find_package generator; the consumer libraries always link to an interface library created by find_package instead of to a target defined in a subdirectory. The tooling still works (at least in CLion) because CLion detects that the consumer library's includes from the other library are within the project (at least I think that's why the tooling still works).
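A minimal sketch of the approach described above, assuming the cmake_find_package generator has written FindHelloB.cmake into the build folder (the package and target names are illustrative):

```cmake
cmake_minimum_required(VERSION 3.5)
project(app CXX)

# let find_package() locate the FindHelloB.cmake that conan generated
list(APPEND CMAKE_MODULE_PATH ${CMAKE_BINARY_DIR})

find_package(HelloB REQUIRED)

add_executable(app main.cpp)
# link against the interface target created by the generator,
# instead of a target declared via add_subdirectory()
target_link_libraries(app HelloB::HelloB)
```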

Hi!

I would like to comment here the current status of workspaces:

  • They were released as experimental.
  • There are some limitations to the current approach; for example, it only works for homogeneous build systems.
  • We are taking a step back, implementing the concept of "editable" packages. With this approach, you can put any package in "edition" mode, and it will be consumed directly from user space, not from the conan local cache. Similar to workspaces, but without the concept of aggregation. There will also be a way to map the layout to user folder layouts.
  • It is advanced; the current status is here, and it might be released in conan 1.12: https://github.com/conan-io/conan/pull/4181
  • When "editable" packages are there, we will rebase workspaces on them, so a workspace will be a collection of packages under edition, plus some high-level build system integrations if possible. The conanws.yml format might change greatly.

Hi @memsharded, first of all I want to say thank you for all the great work with conan project you folks are doing!

I didn't know about this editable packages project and was actually sketching out my own conan-link functionality (no code, just a spec yet). It follows the same two-step link pattern as npm-link (https://docs.npmjs.com/cli/link.html), with a conan flavor. I am trying to avoid any changes to the original packages. The workflow is oriented to recipe-and-sources-in-the-same-repo scenarios only.

  • conan link <ref> inside the package folder turns on linkable mode (<ref> is optional if the recipe has full info):

    • changes all the copy commands in exports_sources and source so that they create symlinks instead of making a copy; turns on 'keep-source' and 'keep-build' (to allow incremental builds), but still calls build() for that recipe

    • uses a separate local cache root for linkable packages

  • conan link <ref> inside a different package consuming the 'linkable' package:

    • switches the dependency <ref> to its link

    • the generator adds a pre-build step to build all the linked packages using conan create

    • stores the link in conanlinks.txt inside the package folder

Can you please let me know if editable packages project is similar or different to this approach? and do you see any downside of the approach I have described above?

Hi @anton-matosov

Not exactly. The general idea is (cc @jgsogo who is developing this feature):

  • conan link <path> <ref> defines a link between the package reference in the cache and the local folder <path> in user space.
  • Now, when some other package requires <ref>, it will use the information in <path>; it won't read anything from the cache. In fact, the cache could be empty.
  • The cache is not modified by this operation. If it had packages installed for <ref>, they will be kept. When the link is removed, the packages will still be there.
  • Consumption of <ref> is transparent to consumers; they are not aware whether the dependency is in the cache or in user space.
  • There is a mapping of folders, so it is not necessary to "package" or put the artifacts in the final layout. It is enough to do the normal build inside the user folder, and consumers will link directly against those built artifacts. The mapping is a file (or files) with includedirs, libdirs, etc.

It seems that in your approach conan still calls build() and package(). The idea of editable packages is to completely avoid conan calling those methods, while still being able to use the artifacts that are built directly by cmake, visual studio, make...

Please tell me if this clarifies it a bit. Thanks!
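
The folder mapping described above can be pictured as a small INI-style file that tells consumers where headers and built libraries live inside the user folder. A sketch only: the section names follow the includedirs/libdirs vocabulary used in this thread, but the exact file name and layout here are illustrative, not a normative spec:

```ini
# editable layout sketch (file name and paths are placeholders)
[includedirs]
src/include

[libdirs]
build/lib

[builddirs]
build
```

With a mapping like this, consumers of the linked `<ref>` would resolve headers from `src/include` and link against artifacts in `build/lib` of the user folder, with no package() step involved.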

@memsharded

Can you please describe the whole developer workflow starting from "nothing" on the local machine and based on the link and editable feature and using the example of @kenfred with A->B->C->D (developed by team 1) and Z->X->C->D (developed by team 2) and what to do and prepare on which layer (Conan, CMake, SCM, ...)?

Currently I'm completely out of sync with the different approaches and features and can't see a good workflow for all developers especially in different teams (probably with different conan config install URLs).

Thanks.

@lawrencem99 Thanks for the info! I've been wanting to transition over to the find_package generator to allow our CMake scripts to be conan-agnostic. However, in this particular situation, we still want to use add_subdirectory because we want all the subprojects to show up together in an IDE. Are you saying CLion will show them all in the IDE even if they are found via find_package? If so, that is a great feature! A lot of people jump through hoops to co-develop their libs with their applications in an IDE and are forced to use add_subdirectory when 'find_package' would be more appropriate and provide better build encapsulation.

@memsharded Do you expect this "editable package" feature to enable a scenario where you have multiple conanfiles in a directory tree? Once again, this is plausible in cases of monorepos and git submodules where you have more than one "package" in the repo tree. We want these submodules to be able to declare their own external dependencies and allow the root package (somewhere else in the tree) to depend on the submodule packages, ultimately resolving the full dependency graph transitively. (Note: in this scenario, the submodule packages don't have to have a name or version, but I'm willing to give them those if it is necessary to make it work.)

@kenfred Yeah, that's what I'm saying, except you still have to use add_subdirectory for submodules; it's just that the consumers of the library aren't linking to the targets defined in the subdirectory CMakeLists, they're linking to its interface library, {libraryname}::{libraryname}, which comes from a conan-generated Find{libraryname}.
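
In CMake terms, the arrangement described above might look roughly like this (a sketch; 'libraryname' and the module path are placeholders, and the Findlibraryname.cmake file is assumed to have been generated by the cmake_find_package generator):

```cmake
# Consumer CMakeLists.txt sketch
# The subproject is still pulled in so it shows up in the IDE:
add_subdirectory("../libraryname" "${CMAKE_CURRENT_BINARY_DIR}/libraryname_build")

# But linking goes through the conan-generated Find module's imported target,
# not the raw targets defined inside the subdirectory:
list(APPEND CMAKE_MODULE_PATH "${CMAKE_BINARY_DIR}")  # where Findlibraryname.cmake was generated
find_package(libraryname REQUIRED)

add_executable(myapp main.cpp)
target_link_libraries(myapp libraryname::libraryname)
```

The point of this split is build encapsulation: the consumer depends only on the interface target, so swapping the subdirectory for a prebuilt conan package requires no CMakeLists changes.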

@memsharded With the new editable packages, will the workspace still add them as subdirectories so the tooling works?

Yet another point about workspaces:

There is a problem with imported build tools. If one uses virtualenv generator to add the tools to PATH before building a package using activate.sh, then with workspaces one ends up with a potential proliferation of activate.sh scripts, each in subdirectories of the build folder, and having to call them all before building.

It would be better if the virtualenv activation scripts were merged in workspaces to activate each project's virtualenv.

This is, of course, assuming that going forward workspaces can have a shared build for all the packages being edited, through cmake add_subdirectory or otherwise. If not, then I guess separate builds are necessary and the virtualenv problem goes away; but in that case my whole workflow is shot, because I rely on the editable packages being in the same overall project for my IDE tools to work, like refactoring support across all the packages.

The problem with imported build tools is also mentioned in #4009.
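
In the meantime, the merged activation could be approximated with a small wrapper that sources every generated activate.sh under the workspace build tree. A minimal sketch, using throwaway folders and fake activation scripts to stand in for the conan-generated ones:

```shell
# Stand-in workspace: two per-package build folders, each with a
# generated-style activate.sh (contents are fake, for illustration only)
mkdir -p /tmp/ws-demo/build/pkgA /tmp/ws-demo/build/pkgB
echo 'export PATH="/tmp/ws-demo/toolA/bin:$PATH"' > /tmp/ws-demo/build/pkgA/activate.sh
echo 'export PATH="/tmp/ws-demo/toolB/bin:$PATH"' > /tmp/ws-demo/build/pkgB/activate.sh

# "Merged" activation: source each script in turn instead of calling
# every activate.sh by hand before building
for script in /tmp/ws-demo/build/*/activate.sh; do
  . "$script"
done

echo "$PATH"
```

This is only a workaround; a real merged virtualenv would also need to handle deactivation and variables other than PATH.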

Similar to the virtualenv generator, the cmake_find_package generator puts FindXXX modules in subdirectories of the build directory; those should probably be merged and placed into the root build folder in the case of workspaces.

Right now I have to hard-code set(CMAKE_MODULE_PATH ${CMAKE_CURRENT_BINARY_DIR}) in each subdirectory's CMakeLists. While this is the way it's done in the Conan tutorial for the cmake_find_package generator, it is a hack.

A better way is to pass -DCMAKE_MODULE_PATH=conan_build_dir on the cmake command line, but it isn't maintainable to add each subdirectory's build folder to the path this way.

If I remember correctly, we can have only one version of each package in the graph of all the workspace packages, so we shouldn't have to worry about versions when merging the module lists.
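
One way to make the command-line approach maintainable is to collect the build subdirectories automatically and pass them as a single semicolon-separated CMAKE_MODULE_PATH. A sketch in shell (the /tmp/ws2 workspace and package names are placeholders, and the FindXXX.cmake files stand in for conan-generated modules):

```shell
# Stand-in workspace with per-package build folders containing Find modules
mkdir -p /tmp/ws2/pkgA/build /tmp/ws2/pkgB/build
touch /tmp/ws2/pkgA/build/FindpkgA.cmake /tmp/ws2/pkgB/build/FindpkgB.cmake

# Build one semicolon-separated module path from every folder that
# contains a conan-generated FindXXX.cmake
MODULE_PATH=$(find /tmp/ws2 -name 'Find*.cmake' -exec dirname {} \; | sort -u | paste -sd ';' -)
echo "$MODULE_PATH"

# Then a single invocation covers all subprojects:
#   cmake -DCMAKE_MODULE_PATH="$MODULE_PATH" ..
```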

Workspaces were added some time ago; most of the comments in this thread have already been addressed.

A new iteration on the workspaces has been done in #4481, merged to develop, and will be released in Conan 1.13.

The documentation for the Workspaces is in https://github.com/conan-io/docs/pull/1086. Please have a look at it once it is released (or now, from the source develop branch or from the latest PyPi-testing package), and open new issues with [workspace] in the title. Thank you!

@memsharded super happy this is becoming production-ready!
