See my talk on how to do it, but you probably already know ;)
https://sched.co/FuKB
Sample repo: https://github.com/luxas/sample-config
Yeah! Definitely one of the reasons I used go, we should get around to fixing this :-)
/assign @munnerz
BTW, the codecs in apimachinery are bound to json-iter and do not support strict unmarshalling. json-iter supports it, except that its options are hardcoded.
So if there is a plan to switch to unmarshalling via the codecs instead of plain yaml.UnmarshalStrict, yaml.UnmarshalStrict possibly has to be moved to a pre-step before decoding.
oh that is an interesting problem. I'm not _overly_ attached to strict unmarshaling; I actually had an ulterior motive of seeing how well switching went before switching prow's config. But it does seem helpful to actually error on unknown fields.
> possibly yaml.UnmarshalStrict has to be moved as a pre-step before decoding.
I'm not sure I follow there, what would that look like roughly?
@neolit123 I think this should be discussed in the componentconfig KEP threads instead of here, which is more the implementation.
But yeah, +1 to warn on unknown fields
> I'm not sure I follow there, what would that look like roughly?
```go
sigs.k8s.io/yaml.UnmarshalStrict(bytes, externalConfigStructInstance) // #1
...
k8s.io/apimachinery/pkg/runtime.DecodeInto(myScheme.Codecs.UniversalDecoder(), bytes, internalConfigStructInstance)
```
validation happens at #1.
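For illustration only, here is a minimal, self-contained sketch of that "strict pre-step" idea using just the standard library: `encoding/json`'s `DisallowUnknownFields` stands in for `sigs.k8s.io/yaml.UnmarshalStrict`, and the second, permissive pass (the codec's `DecodeInto` in the real code) is only described in a comment. The `Config` type and `strictUnmarshal` helper are hypothetical names, not part of any of the repos discussed above.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// Config stands in for an external (versioned) config type.
type Config struct {
	Kind       string `json:"kind"`
	APIVersion string `json:"apiVersion"`
}

// strictUnmarshal is the hypothetical pre-step: it errors on any field
// the target struct does not declare, analogous to what
// sigs.k8s.io/yaml.UnmarshalStrict does for YAML input.
func strictUnmarshal(data []byte, v interface{}) error {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.DisallowUnknownFields()
	return dec.Decode(v)
}

func main() {
	good := []byte(`{"kind":"Config","apiVersion":"kind.sigs.k8s.io/v1alpha2"}`)
	bad := []byte(`{"kind":"Config","unknownField":true}`)

	var c Config
	fmt.Println("good:", strictUnmarshal(good, &c)) // good: <nil>
	fmt.Println("bad:", strictUnmarshal(bad, &c))   // errors on unknownField

	// Only after the strict check passes would the permissive second
	// pass (the codec's DecodeInto) do the actual conversion into the
	// internal type, with defaulting applied.
}
```

The point of the two-pass shape is that the strict check only validates field names against the external type; the codec still owns conversion and defaulting.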
commented: https://github.com/kubernetes/community/pull/2354#issuecomment-439631806
i think that was the KEP thread.
Hmm, I think we should teach the decoder to handle strict instead, then. Our homebrew GVK thing works OK, if not the best thing to maintain long term. It seems like everyone would benefit from a strict option, as discussed here now by @neolit123 and co: https://github.com/kubernetes/community/pull/2354#issuecomment-439633827
@BenTheElder @neolit123 Please redirect discussion to actionable, generic work items of the upcoming k8s.io/component repo you'll utilize. https://docs.google.com/document/d/1nZnzJD9dC0xrtEla2Xa-J6zobbC9oltdHIJ3KKSSIhk/edit
Thanks!
@BenTheElder @munnerz is there any chance that I could help with this? :-)
currently we aren't configuring things covered by k8s.io/component directly (kubeadm potentially is). we'll adopt it when we configure them.
@BenTheElder using https://github.com/luxas/sample-config as a reference, you can adopt ComponentConfig meanwhile. Whether you need k8s.io/component or not is to be seen later when it exists but meanwhile it makes sense to adopt the common k8s API machinery I think
kind is already doing parts of the componentconfig spec:
https://github.com/kubernetes-sigs/kind/tree/master/pkg/cluster/config
That's very cool, thanks @fabriziopandini. Do you think there's anything more that needs to be done, or is this initial work complete?
@luxas after https://github.com/kubernetes-sigs/kind/pull/147 merges there will be internal/external config, conversions, defaulting, and a scheme, which is what's required for implementing multi-node.
Let's do a pair code review afterwards to see if something is missing/could be improved
Sounds good @fabriziopandini!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
this timed out, it seems.
kind already supports component configuration.
once component-base is ready for wider consumption, some of its utilities can be leveraged here if appropriate.