The k8s security updates 1.7.14, 1.8.9, and 1.9.4 make configMaps read-only by default (from CHANGELOG-1.7.md):
Changes secret, configMap, downwardAPI and projected volumes to mount read-only, instead of allowing applications to write data and then reverting it automatically. Until version 1.11, setting the feature gate ReadOnlyAPIDataVolumes=false will preserve the old behavior. (#58720, @joelsmith)
This breaks the Registry and Clair deployments, which try to chown the configMap-mounted config files on startup, resulting in a crash loop. Logs from Clair:
chown: changing ownership of '/config/config.yaml': Read-only file system
Clair entrypoint (docker-entrypoint.sh):
#!/bin/bash
set -e
# Fails on 1.9.4+: /config is now a read-only configMap mount
chown -R 10000:10000 /config
# Drop privileges to UID 10000 and start Clair under dumb-init
sudo -E -H -u \#10000 sh -c "/dumb-init -- /clair2.0.1/clair -config /config/config.yaml"
set +e
Kubernetes version 1.9.5
Helm version 2.8.1
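A quick way to confirm from outside the container that the volume really is mounted read-only (the pod name is illustrative; substitute whatever kubectl get pods reports):
# Any write into the configMap mount of the running Clair pod now fails
# the same way the chown in the entrypoint does.
kubectl exec harbor-clair-xxxx -- sh -c 'touch /config/probe'
# touch: cannot touch '/config/probe': Read-only file system
# The mount flags show it too: the /config volume is listed with "ro".
kubectl exec harbor-clair-xxxx -- sh -c 'grep " /config " /proc/mounts'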
Thanks @MnrGreg for reporting.
/cc @reasonerjt to take a look at this chown issue.
@MnrGreg We are discussing how to solve this problem. Please stay tuned.
Thanks @jessehu. I'm using the --feature-gates workaround in the interim.
I think the design change makes sense: we shouldn't need to run chown on any config volume at all.
I'll fix the Dockerfile.
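Roughly the direction I have in mind for docker-entrypoint.sh: copy the config out of the read-only mount into a path the runtime user can own, instead of chowning the mount in place (the paths below are illustrative, not the final change):
#!/bin/bash
set -e
# /config is a read-only configMap mount on k8s >= 1.9.4, so copy the
# config into a writable directory instead of chowning the mount itself.
mkdir -p /etc/clair
cp /config/config.yaml /etc/clair/config.yaml
chown -R 10000:10000 /etc/clair
# Drop privileges to UID 10000 and start Clair against the writable copy.
sudo -E -H -u \#10000 sh -c "/dumb-init -- /clair2.0.1/clair -config /etc/clair/config.yaml"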
Unfortunately, we'll need more time to reproduce this issue on k8s and provide a fix.
It's possible this fix will be pushed out of 1.5.0 to avoid introducing a regression when integrating with other products.
Fixed in the master branch. Leaving this open until the fix is integrated into the Helm chart.
@reasonerjt Hi, I also met this problem. Can you tell me how to resolve it?
@messagell As a workaround, you can run the kubelet with --feature-gates=ReadOnlyAPIDataVolumes=false, then deploy the Harbor chart again.
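For reference, on kubeadm-provisioned nodes the gate can be applied roughly like this (the file holding the kubelet's extra flags varies by distribution, so treat the path as an assumption):
# Add the feature gate to the kubelet flags on each node and restart it.
# The flags file may be /etc/default/kubelet, /etc/sysconfig/kubelet, or a
# systemd drop-in, depending on how the node was provisioned.
echo 'KUBELET_EXTRA_ARGS=--feature-gates=ReadOnlyAPIDataVolumes=false' | sudo tee -a /etc/default/kubelet
sudo systemctl daemon-reload
sudo systemctl restart kubelet
# Recreate the Harbor release so its volumes are remounted writable (Helm 2 syntax):
helm delete --purge harbor
helm install --name harbor .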
Having just hit this issue while moving to a new K8s 1.10.3 cluster, I thought I'd note that it also affects 1.10 in the same way, and that from 1.11 onwards it can no longer be disabled with the feature gate.
@philosifer Are you using the chart from the master branch? The chart there should have this issue fixed.
I just retried it on my dev cluster and it started up OK. I'll see if I can track down why it won't run on production. The main difference is using https for etcd on prod vs http-only on dev, but I can't see how that would affect the harbor pods.
I noticed that my (soon to be) production cluster was running 1.10.3, while the dev one that worked was on 1.10.2. I couldn't work out what the problem was, so I upgraded both to 1.11.1, and then (after some persistent-volume fixing work) I got the chart to work with the default values.yaml. My own custom version of it with the passwords changed, though, gives me this in the logs for the clair pod:
time="2018-07-19T12:23:43Z" level=fatal msg="failed to load configuration" error="yaml: line 4: did not find expected key"
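For what it's worth, this is roughly how I've been trying to narrow it down (the chart path and value name are made up for illustration):
# Render the chart locally with the custom values and inspect the Clair
# config that the templates actually produce.
helm template ./harbor-helm -f my-values.yaml > rendered.yaml
grep -n 'config.yaml' rendered.yaml
# One common cause of "did not find expected key": passwords containing
# YAML-significant characters must be quoted in values.yaml, e.g.
#   clairDbPassword: "p@ss:word"   # parses
#   clairDbPassword: p@ss:word     # breaks the rendered YAML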
Any ideas, or shall I move it to a new issue?
@philosifer Please use the latest version of the Harbor chart from the master branch.
@ywk253100 I tried to install Harbor using Helm today and still get the same error, following the instructions from the harbor-helm project.
K8s Version: 1.9.4
Helm Version: 2.9.0
Harbor Helm master branch.
Also raised issue at harbor-helm: https://github.com/goharbor/harbor-helm/issues/22
Any help is really appreciated.
I get the same problem using the files from /make/kubernetes in the master branch.
k8s version: v1.11.2
I get this error too, and I'm working off the master branch.