Is this a BUG REPORT or FEATURE REQUEST? (choose one):
FEATURE REQUEST
Environment:
Not applicable
Description:
I'd like to propose that, in addition to yamllint, all roles and plays/playbooks in this repository should also be checked with ansible-lint (https://github.com/ansible/ansible-lint).
We have thought about implementing this, but there is too much to fix for it to work correctly :) Maybe in the future, or if you want, we can look at a PR implementing this :)
I could try to do it check by check (a single PR that fixes everything at once would be horrible to review). It would be nice if someone could help me add the initial CI job, with all checks disabled, to the repository.
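For anyone picking this up, the "initial CI job with all checks disabled" could look roughly like the sketch below. This is an assumption, not the implementation that was eventually merged: the job name, stage, and playbook path are placeholders, and the `skip_list` would initially contain every currently-failing rule ID (such as the ones tallied later in this thread), to be trimmed down rule by rule.

```yaml
# .gitlab-ci.yml fragment (sketch, assuming GitLab CI as the project used at the time):
ansible-lint:
  stage: unit-tests                 # placeholder stage name
  script:
    - pip install ansible-lint
    - ansible-lint -v cluster.yml   # playbook path is a placeholder

# .ansible-lint (separate config file): start by skipping every failing rule,
# then delete entries from skip_list as each rule's errors get fixed.
skip_list:
  - '204'
  - '301'
  # ...one entry per currently-failing rule ID
```

With this setup the linter passes from day one, and each follow-up PR can remove one `skip_list` entry together with the fixes for that rule, which keeps reviews small.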
We are at the stage of moving our CI, so it would be no effort for now to do that, as there might be some changes regarding it.
Do you want to move away from GitLab CI altogether, or just run your integration tests elsewhere?
First we will move the tests; we are already investigating moving away from GitLab.
Hm, do you mean by "it would be no effort for now to do that" that it would be a trivial (effortless) change, or that I should wait a bit until the migration has been done?
I meant that this should not be done right now :) Let's wait for the migration...
Great, feel free to ping me once this has been done, then I'll prepare a PR.
I don't think that it would be beneficial to fix linting errors before the linter is available in CI, since it could easily happen that over time people commit code that would fail the linter again.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Has been implemented in #4411
Well, sort of. I'll start with step 2, fixing the actual errors.
@MarkusTeufelberger I've started with #4508, #4500 and #4496.
Edit: I'm thinking that going one error at a time can make reviews faster and avoid conflicts.
Great! :+1:
For the record, here's the rough amount of errors when #4411 was merged:
- '102' # 1 error
- '103' # 2 errors
- '104' # 3 errors
- '201' # 3 errors
- '204' # ~50 errors
- '206' # tons of errors
- '301' # ~50 errors
- '302' # 1 error
- '303' # 15 errors
- '305' # ~30 errors
- '306' # ~40 errors
- '403' # 5 errors
- '404' # 6 errors
- '502' # ~50 errors
- '503' # 12 errors
- '504' # 3 errors
- '601' # ~40 errors
- '602' # tons of errors
- '701' # 20 errors
Current state:
On the still open ones:
Nice work @MarkusTeufelberger @Miouge1.
/remove-lifecycle stale
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
There is still some work to be done here...
/lifecycle stale
/remove-lifecycle rotten
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/lifecycle stale
Since some global ignores were even added after I already fixed them(!), I'm no longer going to work on the remaining ones.
Still some work to be done by "someone" though.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.