Conan: Conan ignores other repos if the package is found in the first one

Created on 5 Apr 2017 · 18 comments · Source: conan-io/conan

Hello!

I have two repos in registry.txt. The first is conan.io, the second is my own.

If I try to install a package that exists on conan.io (for example gtest/1.8.0@lasote/stable), conan prints that the conan.io repo does not contain a binary package for my settings:

WARN: Can't find a 'gtest/1.8.0@lasote/stable' package for the specified options and settings:

  • Settings: arch=x86_64, build_type=Release, compiler=clang, compiler.libcxx=libc++, compiler.version=4.0, os=Linux
  • Options: cygwin_msvc=False, shared=True

I think it should also search for the package in the second repo, where the needed version exists.

question


All 18 comments

No, that is not possible. Please take into account that each package binary is bound to its package recipe, and both are stored jointly in every remote. Package binaries have to be retrieved from the remote where the recipe was retrieved, for consistency. If you want to install gtest from your second remote, you can specify it: conan install gtest/1.8.0@lasote/stable -r=myremote.

Also, you probably want to create packages in your own remote under your own usernames, not using the "lasote" one. That can easily be done with the conan copy command, or while generating binaries from the package recipe.
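For illustration, a rough sketch of that copy-and-upload flow with the conan 1.x CLI (the target user/channel "myuser/stable" and the remote name "myremote" are placeholders, not from this thread):

    # copy the recipe and all its binary packages under your own user/channel
    conan copy gtest/1.8.0@lasote/stable myuser/stable --all
    # then upload the recipe and all binaries to your own remote
    conan upload gtest/1.8.0@myuser/stable --all -r=myremote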

Hmm. Why? If I change the repo order (my repo first, conan.io second), everything works. Conan first searches for the package in my repo and, if found, downloads it. If not found, it searches conan.io, downloads the recipe and builds it. Afterwards I upload the package to my repo, and next time the prebuilt package for my architecture can be found there.
So why is it not possible to always search all repos, independent of the repo order?

Yes, when there are no local package recipes, conan starts to look in the registered remotes, in order. Once it finds a recipe, all the binaries it retrieves will come from that remote. Remote ordering is used because some deterministic rule is necessary. What happens if you have two different package recipes with the same name in two different remotes? One of them has to be retrieved; you cannot have two different recipes for the same package. So the order of remotes is applied, or the user can override this behavior and explicitly define the remote to get the recipes from.
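As a minimal illustration of that behavior (the remote name is a placeholder):

    # remotes are consulted top to bottom; the first one holding the recipe wins
    conan remote list
    # override the ordering and pin the recipe (and its binaries) to one remote explicitly
    conan install gtest/1.8.0@lasote/stable -r=myremote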

What happens if you have two different package recipes with the same name in two different remotes?

I still don't get that point. Conan searches for binary packages in both

  • the local cache
  • the associated remote

Local cache and remote may have different package recipes when the local recipe is outdated. Why is it ok to use an outdated package from the local cache, but it is not ok to take it from an additional remote?

In our use case, we use the bincrafters boost packages. There are a lot of packages, dependencies and binary packages. It's easy to build missing binary packages in the local cache, but we want to share the binary packages using our company's conan server. Changing the package user name is one solution, but it has some drawbacks:

  • it is not easy to change all the requirements in the recipes
  • it is hard to maintain/merge
  • we cannot use the bintray binary packages anymore -> we have to build everything on our own

Our preferred solution would be for conan to support an ordered list of associated remotes. The first one would be the "master" remote (with the latest recipe). All other remotes would be handled similarly to the local cache.

Local cache and remote may have different package recipes when the local recipe is outdated. Why is it ok to use an outdated package from the local cache, but it is not ok to take it from an additional remote?

The cache is just that: a cache. It is an optimization so that packages do not need to be retrieved from remotes all the time. Of course, when you are creating or modifying packages locally, it will not be in sync with the remote, and it can also get outdated if new packages are uploaded to the remote from somewhere else.

The remotes are "source of truth", but they are decentralized. You can have a Pkg/0.1@user/channel in remote1 and another Pkg/0.1@user/channel in remote2. The recipes in those remotes for that package can be totally different. From the options they take, the settings they consider, how they model the package_id() or the info provided by package_info(). Each remote can have different binaries for different configurations.

There are no consistency guarantees across remotes, and minor inconsistencies between recipes would lead to problems that are extremely difficult to debug. In the local cache, on the other hand, it is guaranteed that all binaries come from the same origin.

I am not fully sure why you need to change the bincrafters packages' user name; can't you just get them and put them on your server as they are? So, if you could store a copy of the bincrafters packages in your company server, everything would be good?

In any case, I'd say this is something I would always do. After the NPM debacles, like left-pad, I am pretty sure that I don't want to depend on external infrastructure in production, so I would always store a copy of my third-party dependencies under my control. This can also help minimize security issues. I am not saying to never trust external binaries, but most of our users are indeed building their own binaries and storing them in their private servers, and I think it is good practice.

Ok, I understand what you mean. Mixing remotes may be dangerous. But isn't it the same with conan remote update_ref? I found this discussion (#2091) where you suggest switching between different remotes for the "same" package.
Another question: If we just copy the packages to our server unchanged, as you recommend, we would need to remove all public remotes from our registry to avoid the kind of problems you describe, right? Otherwise missing requirements may be fetched from the "wrong" remote by accident.

If we just copy the packages to our server unchanged, as you recommend, we would need to remove all public remotes from our registry to avoid the kind of problems you describe, right?

No, you just have to make sure that your personal remote comes first in the order of remotes.
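A sketch of how that could look with the conan 1.x remote commands (the remote name and URL are placeholders):

    # show the current lookup order
    conan remote list
    # add your company remote at position 0 so it is consulted first
    conan remote add mycompany https://conan.mycompany.example --insert 0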

I am successfully using the approach memsharded explained (not changing the package user name) and everything works as expected.

Ok, I understand what you mean. Mixing remotes may be dangerous. But isn't it the same with conan remote update_ref? I found this discussion (#2091) where you suggest switching between different remotes for the "same" package.

No, update_ref updates the remote for everything: both the recipe and all the binaries will come from the newly defined remote. So the binaries will still always reference the correct recipe.
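For context, the command under discussion looks roughly like this in conan 1.x (the reference and remote name are placeholders):

    # from now on, fetch both the recipe and its binaries for this reference from the given remote
    conan remote update_ref boost/1.66.0@conan/stable myremote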

Another question: If we just copy the packages to our server unchanged, as you recommend, we would need to remove all public remotes from our registry to avoid the kind of problems you describe, right? Otherwise missing requirements may be fetched from the "wrong" remote by accident.

No, not necessarily, @bilke is right, using the right remote order works.

In any case, depending on the scenario/product and security concerns, I wouldn't do that. What I would do is:

  • All developers have remotes defined on their machines (provisioned with conan config install; see the sketch after this list), and those remotes are within the company. No external references.
  • When a third party is to be introduced, this is done by someone qualified and authorized. They will audit the external dependencies, fetch them, maybe build the binaries in-house to make sure, maybe even fork the original package recipes to completely own them, just in case, and finally upload them to the company remotes. I might be a bit too concerned about security, or I might not, but I prefer to be on the safe side.
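A possible sketch of the first point, with a hypothetical company URL:

    # install the company-approved remotes (and profiles) from a location the company controls
    conan config install https://git.mycompany.example/conan-config.git
    # developers then see only the internal remotes
    conan remote list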

@bilke I guess it only works as long as conan finds all required packages in your personal remote. If someone is using unexpected options or settings, it can still happen that packages from both remotes are mixed.

@memsharded Your recommended solution may be fine for most companies. We are a research company, so we have a colorful mix of serious software products and experimental research projects. "No external references" is not an option for us. We need a solution where we can mix external remotes (bintray) with our own prebuilt packages.

If someone is using unexpected options or settings, it can still happen that packages from both remotes are mixed.

Thought about it again. Only a completely missing recipe would be a problem. It is still possible (e.g. with conditional requirements) but unlikely.

@bilke I guess it only works as long as conan finds all required packages in your personal remote. If someone is using unexpected options or settings, it can still happen that packages from both remotes are mixed.

No, even if settings and options are different, packages from different remotes won't be mixed. Once a recipe comes from a remote, all the binaries will come from that defined remote.

@memsharded Your recommended solution may be fine for most companies. We are a research company, so we have a colorful mix of serious software products and experimental research projects. "No external references" is not an option for us. We need a solution where we can mix external remotes (bintray) with our own prebuilt packages.

Totally, this works by default: you can have packages from bintray and also your own packages from your own remote. You just cannot mix binaries from bintray and from your own remote for the same package, because of the above-mentioned inconsistencies. If you are going to build your own binaries for a given package and upload them to your own remote, just remember to conan download the other binaries from bintray before uploading them to your remote. I would say that is a much better approach and will lead to far less pain than mixing binaries from different remotes.
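A rough sketch of that download-then-upload flow (the boost reference and the remote names are assumptions, not taken from this thread):

    # fetch the recipe and all existing binaries from the public remote into the local cache
    conan download boost/1.66.0@bincrafters/stable -r=bincrafters
    # build any additional configurations you need locally
    conan install boost/1.66.0@bincrafters/stable --build=missing
    # upload the recipe plus all binaries (downloaded and locally built) to your own remote
    conan upload boost/1.66.0@bincrafters/stable --all -r=mycompany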

@memsharded I am actually seriously contemplating how to resolve this issue from a broader perspective, and have been for a few months. It was literally the first thing I ran into when I created my first Conan talk. Certainly, for Artifactory we will need to come up with a solution that more smoothly addresses this problem so that we can eventually support complex topologies with remote and virtual repositories.

One thing I've been contemplating is checksum-verifying the package recipe and, if the recipe is the same, allowing the binaries to be aggregated. We might decide to do this strictly server-side, but it's interesting to discuss from different perspectives.

@wpalfi one thing I am looking for is a very specific use case where you actually have to aggregate binaries for the same recipe from multiple sources, other than the convenience issues with conan-center.

One thing I've been contemplating is checksum-verifying the package recipe and, if the recipe is the same, allowing the binaries to be aggregated.

Yes, sure, you can check the checksum. But that brings "interesting" (maybe better to say challenging) scenarios, for example when a recipe is updated in one of the remotes and not in the other. Then you would have to remove the binary (which one is the outdated one now?), and that could be very confusing to the user: there was a binary in their local cache, and suddenly after an update there isn't, even though it is still in the remote it used to be in; it just happens that the recipe was modified in a different remote...

I think the Conan client should keep its behavior. To solve the issue mentioned by @markgalpin about aggregating binaries, I think it should be added (maybe opt-in or opt-out) when we implement virtual repositories in Artifactory. The conan client will then connect to just one remote, and Artifactory will manage that situation transparently.

Since the latest conan version (1.5), if you installed the recipe from foo and then specify -r myremote in the conan install command, it won't refuse to retrieve the binary from myremote. So I think this is fixed. Feel free to comment if it isn't.

I found the current behaviour a little surprising.

We use Artifactory and have an unstable developer repo that any internal developer can upload to, and a 'stable' repo that is only published to via our official CI tasks.

By default, we set up developers to look in the unstable developer repo first, then the stable repo, in order to allow developers to see developer test versions of various packages.

But we ran into issues: for some work in progress to explore a newer Visual Studio toolchain, a developer published a pre-built binary package for 'mypackage/3.10.0.0@PORT/stable'. We didn't intend for anyone else to start consuming that flavour yet. The same 'mypackage/3.10.0.0@PORT/stable' recipe was available, built with the team's officially supported toolchain, in the stable repo, but Conan would fail out with:

WARN: Can't find a 'mypackage/3.10.0.0@PORT/stable' package for the specified settings, options and dependencies:
[windows_build_and_unittest] - Package ID: d5817f13e39c3c4195e51ee5243d672b6c26d94b
 ERROR: Missing prebuilt package for 'mypackage/3.10.0.0@PORT/stable'

I now understand the issue to be what's discussed in this ticket.

I think I would have expected that Conan would have continued searching until it found a package version and package id that matched what was required, instead of stopping when it found the first recipe that matched the version and then failing out when the recipe on that remote didn't contain the required package id.

That will only happen with "revisions" activated, where you can be sure that the recipe is the same, so it totally makes sense to share the binaries. We cannot break the current behavior: if the recipe is from remote1, the packages will be looked up in remote1 unless you specify -r remote2.

For us it is sometimes the case that an identical recipe would be present with different binary package flavours in our 'unstable' repo.

I understand the idea of selecting the first matching recipe from the first remote where it is found, and then preferring that remote to avoid ambiguity in all subsequent recipe fetches. But I would have thought it possible, even without the revisions feature activated, to continue scanning and collect all binary package flavours from all configured remotes that have a recipe identical to the one found in the first repository.
