CocoaPods: Exit with a special error code when `pod install` fails due to an out-of-date specs repo

Created on 13 Oct 2016  ·  13 Comments  ·  Source: CocoaPods/CocoaPods


What did you do?

Ran pod install on a computer (a CI) that has not run pod repo update recently, with a Podfile.lock referencing a brand-new version of a pod as a dependency.

What did you expect to happen?

I know that pod repo update is not automatic anymore, to reduce pressure on GitHub's servers. I know that I could pass the --repo-update parameter to pod.

But it would be great if pod could detect the situation and say: "Oh, hey, I see that the Podfile.lock requires a version that I don't have in my local copy of the specs repo. Let's try a pod repo update and pod install again!"

I think it would be a good balance between the need for something that works every time and the execution time / server load, especially when you use CocoaPods on a CI.

At the very least, it could be an option like --repo-update-when-needed (or something less verbose :/)
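
For what it's worth, a CI can approximate this behavior today with a small wrapper around pod install. This is only a sketch: the grep pattern simply matches the error text shown below, and the log file name is an arbitrary choice, not anything CocoaPods provides.

set -o pipefail
# Retry with --repo-update only when the failure looks like an out-of-date specs repo.
# The grep pattern and log file name are assumptions, not CocoaPods behavior.
if ! pod install 2>&1 | tee pod-install.log; then
  if grep -q "None of your spec sources contain a spec satisfying" pod-install.log; then
    pod install --repo-update
  fi
fi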

What happened instead?

[13:00] onejjy@Mac-Julien-3:Demo $ pod install
Analyzing dependencies
[!] Unable to satisfy the following requirements:

- `RealmSwift` required by `Podfile`
- `RealmSwift (= 2.0.1)` required by `Podfile.lock`

None of your spec sources contain a spec satisfying the dependencies: `RealmSwift, RealmSwift (= 2.0.1)`.

You have either:
 * out-of-date source repos which you can update with `pod repo update`.
 * mistyped the name or version.
 * not added the source repo that hosts the Podspec to your Podfile.

Note: as of CocoaPods 1.0, `pod repo update` does not happen on `pod install` by default.

CocoaPods Environment

Stack

   CocoaPods : 1.0.1
        Ruby : ruby 2.0.0p648 (2015-12-16 revision 53162) [universal.x86_64-darwin15]
    RubyGems : 2.4.8
        Host : Mac OS X 10.11.6 (15G1004)
       Xcode : 7.3 (7D175)
         Git : git version 2.7.2
Ruby lib dir : /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib
Repositories : master - https://github.com/CocoaPods/Specs.git @ 3f0c8ff9097ab484f65e1b4ba3982c55f664e434

Installation Source

Executable Path: /Users/onejjy/.gemhome/bin/pod

Plugins

cocoapods-deintegrate : 1.0.1
cocoapods-plugins     : 1.0.0
cocoapods-search      : 1.0.0
cocoapods-stats       : 1.0.0
cocoapods-trunk       : 1.0.0
cocoapods-try         : 1.1.0

Podfile

use_frameworks!
target 'Demo' do
    pod 'RealmSwift'
end

Project that demonstrates the issue

Use a Podfile with a frequently updated pod (like Realm):

use_frameworks!
target 'mytarget' do
    pod 'RealmSwift'
end

Do a pod install.

Then, simulate that your local copy of master repo is outdated:

cd ~/.cocoapods/repos/master && git checkout `git rev-list -n 1 --before="2016-08-01 13:37" master`

(Why 2016-08-01 at 13:37? Just because it's leet o'clock, a long time ago 😄)

And then, do a pod install again.

If you want to clean up your local copy of the repo, just do a:
cd ~/.cocoapods/repos/master && git checkout master
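
Put end to end, the reproduction looks roughly like this (a sketch; the path assumes the default location of the master specs repo, and the commands run from the directory containing the Podfile above):

# 1. Resolve against an up-to-date specs repo (writes Podfile.lock).
pod install
# 2. Simulate an outdated local copy of the master specs repo.
(cd ~/.cocoapods/repos/master && git checkout `git rev-list -n 1 --before="2016-08-01 13:37" master`)
# 3. Now fails: the local specs no longer satisfy the version pinned in Podfile.lock.
pod install
# 4. Clean up the local repo.
(cd ~/.cocoapods/repos/master && git checkout master)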

Labels: easy, confirmed, enhancement

Most helpful comment

The 1st of the 2 improvements is done here thanks to @adellibovi! The final one that was discussed is still left:

Improved messaging when this scenario is detected (i.e. something to the effect of: versions of specs that exist in your Podfile do not exist in your local specs repo, please run pod install --repo-update)

All 13 comments

This feels like a dup of #5697, and I think the main concerns with this are addressed/discussed there. Thanks for filing!

Before creating this issue, I read #5697. I just read it again carefully to be sure that I did not miss something.

#5697 does not address the issue in the same way. Here, the idea is different, and is based on this sentence:

"Oh, hey, I see that PodFile.lock requires a version that I don't have on my repo local copy. Let's try to do a pod repo update and pod install again !"

So, I think this issue should be re-opened.

I think I'd be amenable to 2 improvements here:

  • Improved messaging when this scenario is detected (i.e. something to the effect of: versions of specs that exist in your Podfile do not exist in your local specs repo, please run pod install --repo-update)
  • A special return code from pod install when this scenario is detected. This way, CI can switch to pod install --repo-update as needed (see the sketch below). This might also help with GitHub resource usage, since I'd guess many CI environments always use pod install --repo-update.
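
If such an exit code existed, CI logic could fall back to a repo update only when needed. A minimal sketch, assuming a hypothetical dedicated exit code (the value 31 below is purely illustrative; no specific code is defined in this thread):

# Hypothetical exit code signalling "specs repo out of date"; the value is an assumption.
REPO_OUT_OF_DATE=31
pod install
status=$?
if [ "$status" -eq "$REPO_OUT_OF_DATE" ]; then
  pod install --repo-update
elif [ "$status" -ne 0 ]; then
  exit "$status"
fi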

That's a good start; it totally fits the CI scenario.

But why not go further and do the repo update automatically (only in this particular case)? I don't see any drawback to this (maybe I'm wrong!).

I think it would make pod install a little bit smarter and easier to use for developers who don't understand how it works.

I concur with @Onejjy in the case where there is a Podfile.lock present and the current spec repos don't contain the required specs. Our CI always passes --repo-update today, so it would be less load on GitHub for us to do a transparent automatic update only when the dependencies in the Podfile.lock are missing from the local spec repos (they presumably exist somewhere, or they wouldn't be in the Podfile.lock).

Having a special return code for CI works, but it makes extra work for everyone (humans and CI alike). I don't understand the downside of doing it automatically.

If there were a special exit code for a --repo-update-required failure, I'd update our CI to use it, but maybe not everyone would. In fact, on the command line today, I always pass --repo-update so I don't have to invoke pod install twice (it takes a minimum of 30s after we've done a lot of optimization; it was ~2 min before). I'd guess that an automatic update would put less load on GitHub's servers than the proposed solution of making people do it manually, since it should be relatively infrequent that the Podfile.lock is updated.

I've spent some time thinking about this lately, specifically in the context of improving CI caching for pod install, especially when trying to avoid GitHub rate limiting on larger-scale deployments, and it's hard to find a solution that is reasonably clean.

For the general case, I think @benasher44's suggestions are the most reasonable and easily implementable, as an automatic solution requires significant complexity to be added to the analysis steps.

鈥s an automatic solution requires significant complexity to be added to the analysis steps.

Exactly, and given the added complexity + high stakes of getting this wrong (i.e. we regress to high usage of free GitHub resources), I think it makes sense to start with conservative improvements here.

Maybe we start with these enhancements. Then, someone creates a plugin that writes logic around the new exit code. And, we go from there.

I did not anticipate the complexity of the fully automatic solution. So, given that, I think starting with a special exit code is a good compromise!

The 1st of the 2 improvements is done here thanks to @adellibovi! The final one that was discussed is still left:

Improved messaging when this scenario is detected (i.e. something to the effect of: versions of specs that exist in your Podfile do not exist in your local specs repo, please run pod install --repo-update)

@benasher44 oops, I think I missed that, otherwise I would have included it in that PR.
I can complete this issue :)

@adellibovi no worries! Happy to see it happen across multiple PRs :)

Thank you guys for this great work! It will definitely help us reduce our CI failures!

Both parts of this issue have been addressed in the linked PRs. Closing.
