Kubespray: Check the roles/plays in this repository with ansible-lint

Created on 29 Dec 2018 · 29 comments · Source: kubernetes-sigs/kubespray

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

FEATURE REQUEST

Environment:

Not applicable

Description:

I'd like to propose that besides yamllint, all roles and plays/playbooks in this repository should also be checked with ansible-lint (https://github.com/ansible/ansible-lint).

Labels: feature, good first issue, help wanted, lifecycle/rotten, priority/important-longterm

Most helpful comment

/remove-lifecycle rotten

There is still some work to be done here...

All 29 comments

we have thought about implementing this, but we have too much to fix for that to work correctly :) maybe in the future, or if you want, we can look at a PR implementing this :)

I could try to do it check by check (a single PR that fixes everything at once would be horrible to review). It would be nice if someone could help me by adding an initial CI job, with all checks disabled, to the repository.
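For illustration, a bootstrap job along those lines might look like the sketch below, assuming GitLab CI (which the project used at the time) and the rule IDs listed later in this thread; the job name, stage, and image are made up for the example:

```yaml
# Hypothetical .gitlab-ci.yml job: run ansible-lint with every rule
# skipped, so the job passes from day one and rules can then be
# re-enabled one PR at a time.
ansible-lint:
  stage: unit-tests            # stage name is an assumption
  image: python:3              # any image with pip available works
  script:
    - pip install ansible-lint
    # -x takes a comma-separated list of rule IDs to skip
    - ansible-lint -x 102,103,104,201,204,206,301,302,303,305,306,403,404,502,503,504,601,602,701 cluster.yml
```

cluster.yml is kubespray's main playbook; checking individual roles would work the same way.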

we are at the stage of moving our CI, so it would be no effort for now to do that, as there might be some changes regarding that.

Do you want to move away from GitLab CI altogether, or just run your integration tests elsewhere?

first we will move the tests; we are already investigating moving away from GitLab

Hm, do you mean by "it would be no effort for now to do that" that it would be a trivial (effortless) change, or that I should wait a bit until the migration has been done?

I meant that for now this should not be done :) let's wait for the migration...

Great, feel free to ping me once this has been done, then I'll prepare a PR.

I don't think that it would be beneficial to fix linting errors before the linter is available in CI, since it could easily happen that over time people commit code that would fail the linter again.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Has been implemented in #4411

Well, sort of. I'll start with step 2, fixing the actual errors.

@MarkusTeufelberger I've started with #4508 #4500 and #4496

Edit: I'm thinking that going one error at a time can make reviews faster and avoid conflicts.

Great! :+1:

For the record, here's the rough number of errors when #4411 was merged:

  - '102'  # 1 error
  - '103'  # 2 errors
  - '104'  # 3 errors
  - '201'  # 3 errors
  - '204'  # approx 50 errors
  - '206'  # tons of errors
  - '301'  # approx 50 errors
  - '302'  # 1 error
  - '303'  # 15 errors
  - '305'  # ~30+ errors
  - '306'  # ~40 errors
  - '403'  # 5 errors
  - '404'  # 6 errors
  - '502'  # ~50 errors
  - '503'  # 12 errors
  - '504'  # 3 errors
  - '601'  # ~40 errors
  - '602'  # tons of errors
  - '701'  # 20 errors
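A list like this typically lives under the skip_list key of an .ansible-lint config file at the repository root; a minimal sketch (only two entries shown, with their v4-era rule descriptions):

```yaml
# Hypothetical .ansible-lint at the repository root.
parseable: true      # one finding per line, easy to track in CI
skip_list:           # rules ignored until their violations are fixed
  - '204'            # Lines should be no longer than 160 chars
  - '301'            # Commands should not change things if nothing needs doing
```

Removing an entry from skip_list is what turns a rule back on, which maps directly onto the one-rule-per-PR approach above.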

Current state:

- [x] 102 fixed in #4496
- [x] 103 fixed in #4511
- [x] 104 fixed in #4608
- [x] 201 fixed in #4609
- [x] 204 ignored in #4744
- [x] 206 fixed in #4201 / #4699
- [ ] 301 fixed in ???
- [x] 302 fixed in #4609
- [x] 303 fixed in #4619
- [ ] 305 fixed in ???
- [ ] 306 fixed in ???
- [x] 403 fixed in #4500
- [ ] 404 fixed in #4508
- [x] 502 fixed in #4201 / #4743
- [ ] 503 fixed in ???
- [x] 504 fixed in #4666
- [x] 601 fixed in #4499
- [x] 602 fixed in #4665
- [x] 701 ignored in #4744

On the still open ones:

- 204 - good candidate for ignoring (long lines are very prevalent in this repository...)
- 301 - needs expertise about what each command does and how to detect whether it has already run, to ensure idempotency (see the sketch after this list)
- 305 - good candidate to still be fixed, quite straightforward
- 306 - apparently has some issues on Red Hat derived distros? https://github.com/ansible/ansible-lint/issues/497
- 503 - needs expertise to determine whether a task really should be a handler that might run much later, or whether it needs to run right at that moment
- 701 - good candidate for ignoring (the roles in here are not really designed for Galaxy)
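To make the remaining items concrete, here is a hedged sketch of what fixes for 301, 305, and 503 usually look like; the task names, paths, and service are invented for the example:

```yaml
# 301: declare when a command actually changes something, via
# creates/removes or an explicit changed_when.
- name: Generate certificate once
  command: /usr/local/bin/gen-cert.sh
  args:
    creates: /etc/certs/node.pem    # task is skipped if the file exists

# 305: prefer the command module over shell unless pipes, redirects,
# or other shell features are needed; read-only commands should also
# never report "changed".
- name: Read kubelet version
  command: kubelet --version
  register: kubelet_version
  changed_when: false

# 503: a task guarded by "when: something.changed" is usually better
# written as a handler. Move it to handlers/main.yml and have the
# task that changes the config use "notify: Restart kubelet".
- name: Restart kubelet
  systemd:
    name: kubelet
    state: restarted
```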

Nice work @MarkusTeufelberger @Miouge1.

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

There is still some work to be done here...

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle rotten

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Since some global ignores were even added after I already fixed them(!), I'm no longer going to work on the remaining ones.

Still some work to be done by "someone" though.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
