kustomize got merged into kubectl; we should remove the dependency if we detect a kubectl version greater than or equal to 1.14.
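The version check suggested above could be sketched as follows; this is a minimal sketch, not skaffold's actual detection code, and the version string is a hardcoded sample standing in for the output of `kubectl version --client`:

```shell
#!/bin/sh
# Sketch of the proposed check: kubectl >= 1.14 bundles kustomize, so the
# standalone dependency could be dropped when the client is new enough.
# "ver" is a hardcoded sample; in practice it would be parsed from
# `kubectl version --client`.
ver="v1.18.2"
major=${ver#v}; major=${major%%.*}
minor=${ver#v*.}; minor=${minor%%.*}
bundled=no
if [ "$major" -gt 1 ] || { [ "$major" -eq 1 ] && [ "$minor" -ge 14 ]; }; then
  bundled=yes
fi
echo "kustomize bundled in kubectl: $bundled"
```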
Fyi: at the moment this config throws 'no matches for kind "Kustomization"'
apiVersion: skaffold/v1beta14
kind: Config
deploy:
  kubectl:
    manifests:
    - ./kustomize/kustomization.yaml
    flags:
      apply:
      - -k
git clone https://github.com/dkirrane/strimzi-skaffold-kustomise (see README)
skaffold deploy

no matches for kind "Kustomization" in version "kustomize.config.k8s.io/v1beta1", err: exit status 1

For me, this configuration leads to an error:
deploy:
  kubectl:
    flags:
      global:
      - -k
    manifests:
    - kustomizer/dev/kustomization.yaml
Error:
kubectl --context <my-context> create -k --dry-run -oyaml -f <some-path>/kustomizer/dev/kustomization.yaml
Since I added the -k flag, it seems only logical that it appears after kubectl create, but why does skaffold append -f when -k is already given?
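What seems to be happening can be illustrated with a small sketch; this is a hypothetical reconstruction, not skaffold's actual code path: skaffold appends "-f <manifest>" after any user-supplied flags, so adding "-k" yields an argv that mixes -k and -f, which kubectl rejects (for kubectl, -k expects a kustomization directory and cannot be combined with -f):

```shell
#!/bin/sh
# Hypothetical reconstruction of how the broken command is assembled:
# start with the user's global flags...
set -- kubectl create -k --dry-run -oyaml
# ...then skaffold appends its manifest flag regardless of the -k above:
set -- "$@" -f kustomizer/dev/kustomization.yaml
echo "$@"
```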
Evidently this is still a thing.
https://tilt.dev doesn't have this requirement currently.
Could you please describe the problem that caused the revert, and the behavior you want to achieve eventually (support both the built-in and a tool on PATH, which one wins, whether you'd support something like deploy:kustomize:kustomizePath or SKAFFOLD_KUSTOMIZE_PATH, etc.)?
@AndiDog the reason my "fix" didn't work is that the bundled version of kustomize in kubectl is apparently outdated, so some functionality that users are depending on wasn't there and their deploys were broken.
I think the right behavior in an ideal world would be to always use a binary on the PATH over kubectl's kustomize if it exists - that way users can have an escape hatch if they want to use newer functionality but we also remove the hard requirement on having kustomize installed. I don't think we'll need to support a kustomizePath field in the skaffold.yaml if we just check to see if the binary is installed on the user's PATH.
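The fallback behavior described above could be sketched like this; a minimal sketch under the stated assumptions (kubectl >= 1.14 exposes `kubectl kustomize`), with the function name and directory purely illustrative:

```shell
#!/bin/sh
# Prefer a standalone kustomize binary on PATH (escape hatch for newer
# features); otherwise fall back to the copy bundled into kubectl >= 1.14.
# For demonstration this only echoes the command that would be run.
render_kustomization() {
  dir="$1"
  if command -v kustomize >/dev/null 2>&1; then
    echo "kustomize build $dir"
  else
    echo "kubectl kustomize $dir"
  fi
}
render_kustomization ./kustomize
```

With this approach no kustomizePath field is needed in skaffold.yaml: installing (or removing) the binary on PATH is the switch.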
cc @briandealwis ref https://github.com/GoogleContainerTools/skaffold/pull/4183#issuecomment-640679742