This is a...
Problem:
The example does not work.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["example-node"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
```
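Note that the node-affinity annotation value is a JSON string embedded in the YAML, so a malformed value is easy to miss. A quick standalone sanity check (an editor's sketch, not part of the original report) can parse the annotation before the manifest is applied:

```python
import json

# The annotation value from the manifest above, verbatim. If this string
# were malformed JSON, json.loads would raise a ValueError here instead
# of failing later inside the apiserver.
annotation = '''{
  "requiredDuringSchedulingIgnoredDuringExecution": {
    "nodeSelectorTerms": [
      { "matchExpressions": [
        { "key": "kubernetes.io/hostname",
          "operator": "In",
          "values": ["example-node"]
        }
      ]}
    ]}
}'''

affinity = json.loads(annotation)
terms = affinity["requiredDuringSchedulingIgnoredDuringExecution"]["nodeSelectorTerms"]
print(terms[0]["matchExpressions"][0]["values"])  # ['example-node']
```

This only validates the JSON syntax and key structure; the feature-gate errors below are a separate, server-side problem.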
When I create it, my terminal shows:
```
[root@controller:~/zxy]$ kubectl create -f example-local-pv.yml
The PersistentVolume "example-pv" is invalid:
* metadata.annotations: Forbidden: Storage node affinity is disabled by feature-gate
* spec.local: Forbidden: Local volumes are disabled by feature-gate
```
Proposed Solution:
Fix the example.
Page to Update:
http://kubernetes.io/...
Maybe the solution is in https://github.com/kubernetes/kubernetes/issues/44339
xiangpengzhao commented on 17 Apr
In fact, setting nodeAffinity via annotations is still supported currently. (#41617) However, it's turned off by default. If you want it turned on, you can add the flag --feature-gates AffinityInAnnotations=true when starting kube-apiserver and kube-scheduler. But I think pod.spec.affinity is a more recommended way :)
cc @davidopp @timothysc
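For reference, the `pod.spec.affinity` form recommended above looks roughly like this (an illustrative sketch; the pod name, image, and node name are placeholders, not from the thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - example-node
  containers:
  - name: app
    image: nginx
```

Unlike the annotation, this is structured YAML rather than an embedded JSON string, so typos are caught by normal manifest validation.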
OK, thanks. Let me have a try :)
I added "--feature-gates=PersistentLocalVolumes=true" for kubelet and kube-apiserver, and it works.
Thanks @roffe
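For anyone following along, the gate is passed on each component's command line; a sketch of the flag placement (`...` stands for each component's existing flags, which are elided here):

```shell
kube-apiserver ... --feature-gates=PersistentLocalVolumes=true
kube-scheduler ... --feature-gates=PersistentLocalVolumes=true
kubelet        ... --feature-gates=PersistentLocalVolumes=true
```

As the later comments note, kube-scheduler needs the gate too, not just kubelet and kube-apiserver.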
@zhangxiaoyu-zidif sorry forgot the PersistentLocalVolumes.
But good that it solved your problem! 👍
You do not need --feature-gates AffinityInAnnotations=true for PV node affinity. That flag is only for Pod NodeAffinity.
You also need to add the PersistentLocalVolumes feature gate to kube-scheduler.
@chenopis what's our policy on documenting which alpha feature gates need to be enabled and where?
@msau42 Thanks!
@msau42 It is a good question. With alpha features coming and going in each release, it is hard to track and document where each feature should be enabled. Even if there were such a table or matrix, I suspect few users would reference it. So I would suggest that users enable a feature across all master components, as a realistic way to make sure it is fully applied.
That said, I do see the value of such a matrix. If a feature only targets the kubelet, a user may want to optimize how it is rolled out across nodes. If another feature is only meaningful for the apiserver, applying it to the controller manager and restarting it doesn't sound reasonable. This matters for serious deployments. Maybe something like this?
| Feature | Status | Since | Till | apiserver | kcm | scheduler | kubelet | proxy |
|--------------------------|--------|-------|------|-----------|-----|-----------|---------|-------|
| APIListChunking | alpha | 1.8 | - | Y | | | Y | |
| AffinityInAnnotations | alpha | 1.6 | 1.7 | Y | Y | Y | Y | |
There are heroes working on this now: kubernetes/kubernetes#55067
@msau42 Generally, we don't document Alpha features. However, if you strongly feel it is needed, you can add the information to the relevant page and tag it w/ the following code, which gives notice about what version it's tied to and what the Alpha state means:
```
{% assign for_k8s_version="v1.8" %}{% include feature-state-alpha.md %}
```
@chenopis maybe we need to update the release procedures then? The release team has been pushing for documentation PRs to be created, even for alpha features.
@msau42 Sorry, my bad. I just double-checked, and it's fine to document alpha features for Kubernetes, since it's OSS. We just don't do that for GKE.
We should still tag the feature states in the docs w/ that include code so that we know to update them for each release.
Ok thanks for the clarification!
My k8s cluster was created with kubeadm 1.9. It seems I don't have a kube-apiserver process I can pass flags to directly; is there a workaround?
```
root@node40:/media/share# kubectl cluster-info
Kubernetes master is running at https://192.168.0.40:6443
KubeDNS is running at https://192.168.0.40:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@node40:/media/share# kubectl get po --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   etcd-node40                      1/1     Running   4          1d
kube-system   kube-apiserver-node40            1/1     Running   4          1d
kube-system   kube-controller-manager-node40   1/1     Running   4          1d
kube-system   kube-dns-6f4fd4bdf-zvhmn         3/3     Running   12         1d
kube-system   kube-flannel-ds-ddmd8            1/1     Running   4          1d
kube-system   kube-flannel-ds-phprc            1/1     Running   4          1d
kube-system   kube-proxy-6qs4m                 1/1     Running   4          1d
kube-system   kube-proxy-qmnhz                 1/1     Running   4          1d
kube-system   kube-scheduler-node40            1/1     Running   4          1d
root@node40:/media/share# kubelet --feature-gates=AffinityInAnnotations=true
error: unrecognized key: AffinityInAnnotations
root@node40:/media/share# kubelet --feature-gates PersistentLocalVolumes=true
I0102 06:30:03.576755   29786 feature_gate.go:220] feature gates: &{{} map[PersistentLocalVolumes:true]}
I0102 06:30:03.577011   29786 controller.go:114] kubelet config controller: starting controller
I0102 06:30:03.577174   29786 controller.go:118] kubelet config controller: validating combination of defaults and flags
I0102 06:30:03.607692   29786 server.go:182] Version: v1.9.0
I0102 06:30:03.607722   29786 feature_gate.go:220] feature gates: &{{} map[PersistentLocalVolumes:true]}
I0102 06:30:03.607779   29786 plugins.go:101] No cloud provider specified.
W0102 06:30:03.607797   29786 server.go:328] standalone mode, no API client
W0102 06:30:03.666838   29786 server.go:236] No api server defined - no events will be sent to API server.
....
root@node40:/media/share# kubectl create -f LocalPV.yaml
The PersistentVolume "pv0001" is invalid:
* metadata.annotations: Forbidden: Storage node affinity is disabled by feature-gate
* spec.local: Forbidden: Local volumes are disabled by feature-gate
```
"values": ["node42"] is my node.
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
  annotations:
    "volume.alpha.kubernetes.io/node-affinity": '{
      "requiredDuringSchedulingIgnoredDuringExecution": {
        "nodeSelectorTerms": [
          { "matchExpressions": [
            { "key": "kubernetes.io/hostname",
              "operator": "In",
              "values": ["node42"]
            }
          ]}
        ]}
      }'
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  # storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
```
@YDD9 I think this is your apiserver:
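With a kubeadm cluster, the control-plane components run as static pods, so the flag goes into their manifests under `/etc/kubernetes/manifests/` on the master node; the kubelet restarts a pod automatically when its manifest file changes. A sketch of the edit (the exact manifest layout can differ slightly between kubeadm versions):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml
# (similarly for kube-scheduler.yaml and kube-controller-manager.yaml)
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=PersistentLocalVolumes=true
    # ...existing flags left unchanged...
```

Watch `kubectl get po -n kube-system` afterwards to confirm each component comes back up before editing the next manifest.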
The same issue here.
When I add `--feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true` to the apiserver, scheduler, and controller manager, my cluster dies and I can't access it anymore. But when I only add the feature gates to the kubelet, as follows:

```
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false --serialize-image-pulls=false --feature-gates=PersistentLocalVolumes=true,VolumeScheduling=true"
```

I get this error:
The PersistentVolume "dev-voli" is invalid:
```
Mar 18 08:21:01 hamza-master systemd[1]: Started kubelet: The Kubernetes Node Agent.
Mar 18 08:21:01 hamza-master systemd[1]: Starting kubelet: The Kubernetes Node Agent...
Mar 18 08:21:01 hamza-master kubelet[10661]: I0318 08:21:01.481481   10661 feature_gate.go:220] feature gates: &{{} map[PersistentLocalVolumes:true VolumeScheduling:true]}
Mar 18 08:21:01 hamza-master kubelet[10661]: I0318 08:21:01.481696   10661 controller.go:114] kubelet config controller: starting controller
Mar 18 08:21:01 hamza-master kubelet[10661]: I0318 08:21:01.481701   10661 controller.go:118] kubelet config controller: validating combination of defaults and flags
Mar 18 08:21:01 hamza-master kubelet[10661]: I0318 08:21:01.498867   10661 server.go:182] Version: v1.9.2
Mar 18 08:21:01 hamza-master kubelet[10661]: I0318 08:21:01.498924   10661 feature_gate.go:220] feature gates: &{{} map[PersistentLocalVolumes:true VolumeScheduling:true]}
Mar 18 08:21:01 hamza-master kubelet[10661]: I0318 08:21:01.499033   10661 plugins.go:101] No cloud provider specified.
```
Any help?
Hi @HamzaK8s, can you get the logs from your apiserver and scheduler to help see why they can't come up? The feature gate needs to be enabled on all 3 components.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close