EDIT: neolit123
see comments here for an update of the problem:
https://github.com/kubernetes/kubeadm/issues/1953#issuecomment-589989066
TL;DR: we need to print a warning if the user has a patch that does not match the name/namespace/GVK of one of the objects that we support patching.
BUG REPORT
kubeadm version:
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:20:25Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Environment:
No kustomization was applied when trying to patch a static pod manifest. I was running kubeadm init together with the --experimental-kustomize (or -k) flag, pointing it at a kubeadm-patches folder containing a kustomization.yaml plus a patchesJson6902 patch to try to achieve this.
I would expect the kustomization to be applied, or to get an error or reason for why it wasn't. I would also expect log lines (at least with --v=5) containing [kustomize], since I specified the --experimental-kustomize flag.
```
#!/usr/bin/env bash

mkdir -p /tmp/kubeadm-patches/

cat >/tmp/kubeadm-patches/kustomization.yaml <<EOF
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: kube-apiserver
  path: add-service-account-key-file.yaml
EOF

cat >/tmp/kubeadm-patches/add-service-account-key-file.yaml <<EOF
- op: add
  path: /spec/containers/0/command/-
  value: --service-account-key-file=/tmp/additional-issuer.pub
EOF

kubeadm init --experimental-kustomize /tmp/kubeadm-patches/
```
When I copy /etc/kubernetes/manifests/kube-apiserver.yaml to /tmp/kubeadm-patches/ and add kube-apiserver.yaml as a resource in the kustomization.yaml, running kubectl kustomize /tmp/kubeadm-patches/ works fine.
Here's a gist for that:
Running --experimental-kustomize in the same environment, using a very simple example which adds a k/v to metadata works fine:
```
$ cat /foo/patch1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
  foo: bar

$ kubeadm init --experimental-kustomize /foo/ --v=5
...
I1202 21:22:58.251840 19327 manifests.go:91] [control-plane] getting StaticPodSpecs
[kustomize] Applying 1 patches to /v1, Kind=Pod Resource=kube-system/kube-apiserver
...
```
/assign
@dnmgns I was trying to reproduce your issue and kubeadm init executed successfully.
```
kubeadmm1@kubeadmm1-VirtualBox:~$ sudo kubeadm init --experimental-kustomize /tmp/kubeadm-patches/ --apiserver-advertise-address=192.168.99.100 --ignore-preflight-errors=NumCPU
W0131 11:01:41.250434 5455 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0131 11:01:41.250552 5455 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadmm1-virtualbox kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.99.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadmm1-virtualbox localhost] and IPs [192.168.99.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadmm1-virtualbox localhost] and IPs [192.168.99.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0131 11:01:48.680263 5455 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0131 11:01:48.682556 5455 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.001906 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubeadmm1-virtualbox as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadmm1-virtualbox as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: l66hyo.eoh9d53drlow7gwi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.99.100:6443 --token l66hyo.eoh9d53drlow7gwi \
    --discovery-token-ca-cert-hash sha256:928a39322eeb8b641a468711a5a5174db64e4f8c3d7c65c229354dad2446a604
```
@dnmgns Is there any missing step, or any additional configuration that needs to be done apart from the above? Kindly let me know.
@RA489 - The issue is not that kubeadm init fails to execute successfully. There's no additional configuration that needs to be done apart from the above.
It seems that you were able to reproduce the issue.
In the output from your kubeadm init command, please notice that there is no output from kustomize ([kustomize]) at all. The issue here is that kustomize doesn't seem to be running during init - wouldn't you expect some output from it when running that command?
The same issue, put in other words, is that:
kubectl kustomize /tmp/kubeadm-patches/ applies the patches from that path and produces the output:
[kustomize] Applying 1 patches to /v1, Kind=Pod Resource=kube-system/kube-apiserver
kubeadm init --experimental-kustomize /tmp/kubeadm-patches/ does NOT apply the patches from that path and doesn't produce any [kustomize] output - is it even running during init?
kubeadm should print [kustomize] ... lines
https://github.com/kubernetes/kubernetes/blob/fa3dfa82b0ae05cba900b468698f394a2a184ba9/cmd/kubeadm/app/util/kustomize/kustomize.go#L192
so there might be a bug at play here.
@dnmgns I have the same issue, how did you solve it?
I want to patch a static pod container command during kubeadm init, but kubeadm init --experimental-kustomize does not work. For now I use kubectl kustomize to do it during init.
The steps are as below:
```
#!/usr/bin/env bash

kubeadm init phase etcd local --config=kubeadm.yaml

mkdir -p /tmp/kubeadm-patches/
mv /etc/kubernetes/manifests/etcd.yaml /tmp/kubeadm-patches/etcd.yaml

cat >/tmp/kubeadm-patches/kustomization.yaml <<EOF
resources:
- etcd.yaml
patchesJson6902:
- target:
    version: v1
    kind: Pod
    name: etcd
  path: etcd-commands-patch.yaml
EOF

cat >/tmp/kubeadm-patches/etcd-commands-patch.yaml <<EOF
- op: replace
  path: /spec/containers/0/command/1
  value: --advertise-client-urls=https://192.168.100.236:2379
- op: replace
  path: /spec/containers/0/command/6
  value: --initial-advertise-peer-urls=https://192.168.100.236:2380
- op: replace
  path: /spec/containers/0/command/11
  value: --listen-peer-urls=https://192.168.100.236:2380
- op: replace
  path: /spec/containers/0/command/12
  value: --name=k8s-236
EOF

kubectl kustomize /tmp/kubeadm-patches/ > /etc/kubernetes/manifests/etcd.yaml
```
@dnmgns @pytimer
to work around the issue make sure you pass the namespace:
```
version: v1
kind: Pod
name: ....
namespace: kube-system
```
I do not understand why kustomize in kubectl tolerates the lack of namespace, but kubeadm needs it.
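A simplified illustration of why the namespace matters (this is not the actual kubeadm code, just a sketch assuming patches are looked up by a namespace/name style key): the static pods are generated in kube-system, so a target without a namespace never matches them.
```
package main

import "fmt"

// Sketch only: if patches are indexed by a "namespace/name" key, a target
// that omits the namespace is stored under ""/kube-apiserver and is never
// found when looking up patches for the generated kube-system/kube-apiserver
// static pod, so nothing is applied and no [kustomize] line is printed.
func main() {
    key := func(namespace, name string) string { return namespace + "/" + name }

    // Patch parsed from a kustomization target that has no namespace.
    patches := map[string]string{
        key("", "kube-apiserver"): "add-service-account-key-file.yaml",
    }

    // Lookup performed for the static pod that kubeadm actually generates.
    if _, ok := patches[key("kube-system", "kube-apiserver")]; !ok {
        fmt.Println("no patches found for kube-system/kube-apiserver")
    }
}
```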
options to improve kubeadm here:
1) make kubeadm print a verbose message when it cannot find patches for an object:
```
if patchesCnt == 0 {
    klog.V(1).Infof("[kustomize] Did not find any patches for %s Resource=%s/%s", resource.GroupVersionKind(), resource.GetNamespace(), resource.GetName())
    return data, nil
}
```
2) make kubeadm print a warning when the user kustomization does not match a known object to patch.
option 2 feels better.
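For reference, a rough sketch of what option 2 could look like (the package, function name, and types here are hypothetical and not the actual kubeadm API):
```
package kustomize

import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/klog"
)

// target identifies the object a user patch is meant for (hypothetical type).
type target struct {
    GVK       schema.GroupVersionKind
    Namespace string
    Name      string
}

// warnUnusedPatches warns about user patches whose target does not match
// any of the objects kubeadm is going to generate and patch.
func warnUnusedPatches(userPatches, knownObjects []target) {
    for _, p := range userPatches {
        matched := false
        for _, o := range knownObjects {
            if p.GVK == o.GVK && p.Namespace == o.Namespace && p.Name == o.Name {
                matched = true
                break
            }
        }
        if !matched {
            klog.Warningf("[kustomize] patch targets %s %s/%s, which does not match any object generated by kubeadm; the patch will be ignored",
                p.GVK, p.Namespace, p.Name)
        }
    }
}
```
The idea is simply to compare every user-supplied target against the full set of objects kubeadm generates and patches, and warn about anything left over.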
I assume there is some misalignment in kustomize versions between kubeadm and kubectl, but fixing this probably isn't easy while we are still in k/k.
However, as a stopgap, I'm +1 to option 2.
> @dnmgns @pytimer
> to work around the issue make sure you pass the namespace:
>
>     version: v1
>     kind: Pod
>     name: ....
>     namespace: kube-system
>
> I do not understand why kustomize in kubectl tolerates the lack of namespace, but kubeadm needs it.
@neolit123 Thanks, after adding the namespace in kustomization.yaml, kubeadm init --experimental-kustomize works well.
> options to improve kubeadm here:
>
> 1) make kubeadm print a verbose message when it cannot find patches for an object:
>
>     if patchesCnt == 0 {
>         klog.V(1).Infof("[kustomize] Did not find any patches for %s Resource=%s/%s", resource.GroupVersionKind(), resource.GetNamespace(), resource.GetName())
>         return data, nil
>     }
>
> 2) make kubeadm print a warning when the user kustomization does not match a known object to patch.
>
> option 2 feels better.
option 2 +1
@pytimer - I actually skipped performing this operation with kustomize.
I found out that I could achieve the same thing just by adding extraArgs to the ClusterConfiguration with kubeadm. In this case I could just add service-account-signing-key-file.
Here's an example:
```
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: xxx:6443
apiServer:
  certSANs:
  - $dns
  - $ip
  extraArgs:
    cloud-provider: aws
    service-account-signing-key-file: /some/path/to/oidc-issuer.key
    service-account-issuer: $issuer
    api-audiences: sts.amazonaws.com
  timeoutForControlPlane: 5m0s
certificatesDir: /etc/kubernetes/pki
clusterName: some-cluster
```
@neolit123 - Thanks for letting me know why the patch wasn't applied!
/retitle kustomize: print warning if no matching objects were found
+1 on option 2
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
we will possibly omit further patches for --experimental-kustomize due to:
https://github.com/kubernetes/kubeadm/issues/2046