The branchprotector override mechanism is great, and we should use it in every config where people can configure orgs, repos, etc., to allow for more specific overrides. In particular, plugins like approve have a flat list of configs and no override logic, but users expect overrides and end up committing configuration that does nothing.
/area prow
/kind bug
/cc @fejta @cjwagner @BenTheElder
A common library that provides helper functions for handling these types of configs could be a nice way to standardize how repo-specific configuration is specified. A plugin could just provide a leaf config struct, and the library would provide the complete config struct with global, org, repo, and branch specific levels.
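For illustration, the library could hand back something roughly like this once a plugin supplies its leaf options (a sketch only, with hypothetical names; generics are used purely for illustration and serialization tags are omitted):

// Sketch of the wrapper a shared library could provide (hypothetical names,
// not an existing Prow package): the plugin defines only the leaf options
// type L, and the library supplies the global, org, repo, and branch levels.
package layered

// Tree carries a plugin's leaf options L at every level of specificity.
type Tree[L any] struct {
	Global L
	Orgs   map[string]OrgTree[L]
}

type OrgTree[L any] struct {
	Org   L
	Repos map[string]RepoTree[L]
}

type RepoTree[L any] struct {
	Repo     L
	Branches map[string]L
}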
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
This might be an interesting medium sized feature for a new contributor to work on. It has a bit more complexity than other "help wanted" issues, but it doesn't require much familiarity with Prow.
/help
/remove-lifecycle rotten
I would like to work on that. Could I get a bit of guidance, please?
We don't have the design fully worked out for this one yet, but I was thinking of a common library like the one described here: https://github.com/kubernetes/test-infra/issues/9757#issuecomment-428647392. In particular I think you could closely follow the pattern that branch protector does and generalize it with a leaf config interface that exposes parsing and merging functionality.
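Roughly, I'm imagining a contract like this (just a sketch, not the actual branchprotector code; generics used only for illustration):

// Rough sketch of a leaf-config contract the shared helper could generalize
// over. A plugin's options struct implements Apply by copying over every
// field the child level explicitly sets (e.g. non-nil pointers).
package layered

// Leaf is implemented by a plugin's per-level options struct.
type Leaf[T any] interface {
	// Apply returns the receiver with every field explicitly set in child
	// overriding the receiver's value.
	Apply(child T) T
}

// Resolve folds the levels from least to most specific.
func Resolve[T Leaf[T]](global, org, repo T) T {
	return global.Apply(org).Apply(repo)
}

The plugin would only have to implement the merge for its own fields; the helper would handle walking from global to org to repo.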
I'm not sure if a complete design document is necessary for this, but it would be great if you could run your design by us in some form or another.
Thanks @cjwagner, I will try my best and do a couple of iterations with test-infra until we have something acceptable.
Alright, so basically the output of this issue will be an override mechanism for plugin configs that generalizes what the branch-protection section of config.yaml does today. Is that correct?
@cjwagner @stevekuznetsov could you confirm these assumptions and the location/name for the helper, so that I can propose something in a PR? (I think it will be easier to comment directly on some code.)
I think it's not so much deck, plank, etc. that need to be changed -- it's wherever we currently already have per-repo configuration exposed. Most of the plugins (but not all) do this. I'm not opposed to doing this generically for everything, but that seems like a huge change.
@cjwagner
Yeah, this pattern only makes sense for things that have repo specific configuration, so let's only use it in those cases.
@stevekuznetsov here is a proposed format for approve in plugins.yaml:
approve:
  default:
    issue_required: false
    require_self_approval: false
    lgtm_acts_as_approve: false
    ignore_review_state: true
  orgs:
    bazelbuild:
      default:
        ignore_review_state: false
    kubernetes:
      default:
        lgtm_acts_as_approve: true
      repos:
        kops:
          lgtm_acts_as_approve: false
        kubernetes:
          issue_required: true
It is inspired by labels.yaml and is a bit different from branchprotector, but it seems more logical to me.
I think the default sections add an unnecessary extra layer. Can we embed the defaults struct or just use its fields instead (see the sketch after the example below)? I think config would ideally look like:
approve:
  issue_required: false
  require_self_approval: false
  lgtm_acts_as_approve: false
  ignore_review_state: true
  orgs:
    bazelbuild:
      ignore_review_state: false
    kubernetes:
      lgtm_acts_as_approve: true
      repos:
        kops:
          lgtm_acts_as_approve: false
        kubernetes:
          issue_required: true
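Concretely, the embedding could look roughly like this in Go (hypothetical types assuming gopkg.in/yaml.v2-style tags, not the current plugins definitions):

// Hypothetical types for the embedded-defaults layout above.
package approvecfg

type ApproveOptions struct {
	IssueRequired       bool  `yaml:"issue_required"`
	RequireSelfApproval *bool `yaml:"require_self_approval,omitempty"`
	LgtmActsAsApprove   bool  `yaml:"lgtm_acts_as_approve"`
	IgnoreReviewState   *bool `yaml:"ignore_review_state,omitempty"`
}

// Approve embeds the options directly instead of nesting them under a
// "default:" key, so global defaults sit at the top level of the YAML.
type Approve struct {
	ApproveOptions `yaml:",inline"`
	Orgs           map[string]ApproveOrg `yaml:"orgs,omitempty"`
}

type ApproveOrg struct {
	ApproveOptions `yaml:",inline"`
	Repos          map[string]ApproveOptions `yaml:"repos,omitempty"`
}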
@cjwagner there is something I don't like about the current approve config, and it's RequireSelfApproval and IgnoreReviewState being pointers. Because of that we have to specify them everywhere, which complicates the config.
I suggest we move away from these pointers and rely on a bool's default value of false. It would look like this:
approve:
  orgs:
    helm:
      repos:
        charts:
          lgtm_acts_as_approve: true
    kubernetes:
      lgtm_acts_as_approve: true
      repos:
        community:
          lgtm_acts_as_approve: false
        kops:
          lgtm_acts_as_approve: false
        kubernetes:
          lgtm_acts_as_approve: false
        org:
          lgtm_acts_as_approve: false
        test-infra:
          lgtm_acts_as_approve: false
    kubernetes-incubator:
      repos:
        ip-masq-agent:
          lgtm_acts_as_approve: true
Or maybe you would prefer to avoid the off/on/off pattern for lgtm_acts_as_approve in the kubernetes org, in favor of:
approve:
  orgs:
    helm:
      repos:
        charts:
          lgtm_acts_as_approve: true
    kubernetes:
      repos:
        cloud-provider-aws:
          lgtm_acts_as_approve: true
        cloud-provider-azure:
          lgtm_acts_as_approve: true
        cluster-registry:
          lgtm_acts_as_approve: true
        contrib:
          lgtm_acts_as_approve: true
        dashboard:
          lgtm_acts_as_approve: true
        dns:
          lgtm_acts_as_approve: true
        enhancements:
          lgtm_acts_as_approve: true
        examples:
          lgtm_acts_as_approve: true
        federation:
          lgtm_acts_as_approve: true
        gengo:
          lgtm_acts_as_approve: true
        ingress-gce:
          lgtm_acts_as_approve: true
        ingress-nginx:
          lgtm_acts_as_approve: true
        klog:
          lgtm_acts_as_approve: true
        kube-deploy:
          lgtm_acts_as_approve: true
        kubeadm:
          lgtm_acts_as_approve: true
        kubectl:
          lgtm_acts_as_approve: true
        kubernetes-template-project:
          lgtm_acts_as_approve: true
        kube-state-metrics:
          lgtm_acts_as_approve: true
        minikube:
          lgtm_acts_as_approve: true
        node-problem-detector:
          lgtm_acts_as_approve: true
        perf-tests:
          lgtm_acts_as_approve: true
        publishing-bot:
          lgtm_acts_as_approve: true
        release:
          lgtm_acts_as_approve: true
        repo-infra:
          lgtm_acts_as_approve: true
        sig-release:
          lgtm_acts_as_approve: true
        steering:
          lgtm_acts_as_approve: true
        utils:
          lgtm_acts_as_approve: true
    kubernetes-incubator:
      repos:
        ip-masq-agent:
          lgtm_acts_as_approve: true
I have the feeling I need a small utility to verify all these configs are equivalent... otherwise the review will be painful.
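Something like this is what I have in mind (a rough Go sketch assuming the proposed schema above and only checking lgtm_acts_as_approve; none of these types exist in Prow yet):

// equivcheck: resolve lgtm_acts_as_approve for every org/repo mentioned in
// two candidate approve configs and print any differences.
// Usage: go run equivcheck.go before.yaml after.yaml
package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v2"
)

// These structs mirror the schema proposed above; they are illustrative only.
type Repo struct {
	LgtmActsAsApprove *bool `yaml:"lgtm_acts_as_approve"`
}

type Org struct {
	LgtmActsAsApprove *bool           `yaml:"lgtm_acts_as_approve"`
	Repos             map[string]Repo `yaml:"repos"`
}

type Approve struct {
	LgtmActsAsApprove *bool          `yaml:"lgtm_acts_as_approve"`
	Orgs              map[string]Org `yaml:"orgs"`
}

type Config struct {
	Approve Approve `yaml:"approve"`
}

// resolve walks global -> org -> repo; the most specific explicit value wins,
// and an unset field falls back to false.
func resolve(c Config, org, repo string) bool {
	v := false
	if p := c.Approve.LgtmActsAsApprove; p != nil {
		v = *p
	}
	o, ok := c.Approve.Orgs[org]
	if !ok {
		return v
	}
	if p := o.LgtmActsAsApprove; p != nil {
		v = *p
	}
	if r, ok := o.Repos[repo]; ok && r.LgtmActsAsApprove != nil {
		v = *r.LgtmActsAsApprove
	}
	return v
}

// pairs records every "org/repo" (and bare "org/") mentioned in a config.
func pairs(c Config, into map[string]bool) {
	for org, o := range c.Approve.Orgs {
		into[org+"/"] = true
		for repo := range o.Repos {
			into[org+"/"+repo] = true
		}
	}
}

func load(path string) Config {
	var c Config
	b, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal(b, &c); err != nil {
		panic(err)
	}
	return c
}

func main() {
	a, b := load(os.Args[1]), load(os.Args[2])
	all := map[string]bool{}
	pairs(a, all)
	pairs(b, all)
	for key := range all {
		parts := strings.SplitN(key, "/", 2)
		if x, y := resolve(a, parts[0], parts[1]), resolve(b, parts[0], parts[1]); x != y {
			fmt.Printf("%s: lgtm_acts_as_approve differs: %v vs %v\n", key, x, y)
		}
	}
}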
RequireSelfApproval and IgnoreReviewState being pointers. Because of that we have to specify them everywhere, which complicates the config.
I don't follow. We don't have to specify them everywhere, only where they differ from the defaults. We need pointers to distinguish whether a value is specified at each level, so we know when a value is being overridden.
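For example (just a sketch, not the actual approve plugin code):

// Tiny sketch of why the pointer matters when merging levels: with *bool,
// "not set" and "explicitly false" are different states, so a child level
// only overrides what it actually sets.
package layered

func mergeBool(parent, child *bool) *bool {
	if child != nil {
		// Explicitly set at the more specific level: override the parent.
		return child
	}
	// Unset at this level: inherit whatever the less specific level decided.
	return parent
}

With a plain bool there would be no way to tell an org that omitted require_self_approval from one that set it to false, so the omission would silently override a global true.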
Oh, ok. I thought it would explain why the config was filled with lgtm_acts_as_approve: false everywhere...
/assign
/remove-help
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Ok fejta-bot, I will finish that one!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale