The use case I have is this: in development I want all my services to use a ClusterIP instead of a LoadBalancer, and enumerating every service I want to mutate isn't something I look forward to maintaining.
My proposal is we adjust GvknEquals to look like:
```go
func (n ResId) GvknEquals(id ResId) bool {
	return n.gvKind.Equals(id.gvKind) && (n.name == id.name || id.name == "*")
}
```
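As a standalone sketch of the proposed matching semantics (types simplified here; the real `ResId` in kustomize wraps a gvk struct and carries prefix/suffix fields):

```go
package main

import "fmt"

// ResId is a simplified stand-in for kustomize's resource identifier.
type ResId struct {
	gvKind string // stand-in for the group/version/kind struct
	name   string
}

// GvknEquals matches on gvk plus name, treating "*" as the target
// name as a wildcard that matches any resource name.
func (n ResId) GvknEquals(id ResId) bool {
	return n.gvKind == id.gvKind && (n.name == id.name || id.name == "*")
}

func main() {
	svc := ResId{gvKind: "v1/Service", name: "frontend"}
	fmt.Println(svc.GvknEquals(ResId{gvKind: "v1/Service", name: "*"}))       // wildcard target matches
	fmt.Println(svc.GvknEquals(ResId{gvKind: "v1/Service", name: "backend"})) // different name does not
}
```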
It would allow usage like:
```yaml
patchesJson6902:
- target:
    version: v1
    kind: Service
    name: "*"
  path: patches/service_type_patch.json
```
I've been running this locally and believe it would also resolve #412.
I can have a PR ready in short order if this is deemed an acceptable solution.
I think this would be very useful! I would also consider using `ok, _ := filepath.Match(id.name, n.name)` to support pattern matching, e.g. `name: "*-deployment"`.
I'm suspicious of wildcards in config, because by definition they have unintended side effects. They work great for some use cases, then cause an outage in some other unanticipated workflow.
As you mention, you don't look forward to maintaining a bunch of Service patch directives (me neither).
Another approach would be to provide a kustomization-file editing command to make it easier to specify multiple names in a patch directive.
We should have some discussion of the downside of wildcards (if any) before adding this, since the door cannot be closed later. For more background, see
https://github.com/kubernetes-sigs/kustomize/blob/master/docs/eschewedFeatures.md#globs-in-kustomization-files
It only mentions file name arguments, not "names" or "kinds", but it's related.
Perhaps some form of wildcarding would be OK for names and kinds. But let's find a downside to discuss first.
Agreed. I think this would be a different conversation if we were talking about allowing wildcards in places beyond the name. I'm not comfortable with allowing wildcards for the Kind attribute, as the spec shapes vary so extensively that all bets would be off on the outcome of a patch application.
#409 also sounds related.
Hi,
I'm interested in this feature, but I was thinking about using label selectors instead of wildcards.
That way we have more control over which items are in scope.
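A hypothetical target syntax for that idea might look like the following; note this is purely illustrative, and the `labelSelector` field name is an assumption, not an existing kustomize feature in this thread:

```yaml
patchesJson6902:
- target:
    version: v1
    kind: Service
    labelSelector: "app.kubernetes.io/part-of=my-app"
  path: patches/service_type_patch.json
```

Selectors would scope the patch to labeled resources only, avoiding the match-everything risk of a bare `"*"`.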
Summarized the requests in #720. Closing this one as a duplicate.
I don't see how #720 summarizes this issue. This issue here, as far as I understand, talks about patching multiple resources that are not necessarily known when creating the patch.
The use case that interests me is where provisioning an infrastructure would also create a patch that adds infrastructure-specific annotations.
For instance:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/backend-protocol: HTTP
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/certificate-arn: 'arn:aws:acm:us-east-1:XXXXXXXX:certificate/YYYYYYYYYYYYYYYYYYYY'
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
```
The idea is to include this patch in one or more independent apps that will be deployed to this Kubernetes cluster.