Running `skaffold deploy --label skaffold.dev/run-id="" ...` with v1.0.0 against an existing deployment made with v0.41 should not attempt to update spec fields it is not allowed to change.
Skaffold appears to patch volumeClaimTemplates in my StatefulSets, producing the error:
- for: "STDIN": StatefulSet.apps "keydb-store" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
Overriding the label app.kubernetes.io/managed-by with a static value works around the issue.
There's a similar issue with Jobs, but the error there is "field is immutable". Overriding the managed-by label seems to work around that as well.
apiVersion: skaffold/v1
kind: Config
deploy:
  kustomize:
    path: .
@taisph Hello, could you please share the config you use to override the labels? I am new to all of this and am trying to fix a deployment that fails with the same error as yours. Thank you for your help.
@logart Here's the full anonymized command I currently use in the ops pipeline with skaffold v1.0.0:
skaffold deploy \
--filename=../../skaffold.yaml \
--default-repo=gcr.io/project-id \
--build-artifacts=../../builds/project-id-debug/build.json \
--kube-context=gke_project-id_region_k8s-cluster-1 \
--label skaffold.dev/run-id="static" \
--label app.kubernetes.io/managed-by="skaffold"
I've encountered a similar problem on any subsequent execution of "skaffold run". I think this is because Skaffold attempts to update volumeClaimTemplates labels (e.g. run-id).
Also this is somewhat related to https://github.com/kubernetes/kubernetes/issues/68737
@taisph thanks for filing. The run-id did introduce a bit of weirdness that we haven't quite sorted out yet, but I'm not sure how to repro this issue. Do you think you could give me a small example project to work with?
I am having the same problem here.
When I use a StatefulSet, any update to the cluster by rerunning "skaffold run" ends in the error:
the StatefulSet "my-sts" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden
FATA[0000] kubectl error: kubectl apply: exit status 1
Also encountering this issue. I've been able to work around it by dropping the following into my kustomization.yaml:
commonLabels:
  skaffold.dev/run-id: static
  app.kubernetes.io/managed-by: skaffold
I attempted to deploy from a host without a local Docker installation, and Skaffold felt the need to change skaffold.dev/docker-api-version from "1.40" to null, so now I have to add that to my list of overridden Skaffold labels. I don't even understand the value of that particular label in a remotely deployed manifest. :thinking:
Patch output from kubectl via Skaffold 1.7.0 (Google Cloud SDK version):
{
  "metadata": {
    "annotations": {
      "kubectl.kubernetes.io/last-applied-configuration": "redacted"
    },
    "labels": {
      "skaffold.dev/docker-api-version": null
    }
  },
  "spec": {
    "template": {
      "metadata": {
        "labels": {
          "skaffold.dev/docker-api-version": null
        }
      }
    },
    "volumeClaimTemplates": [
      {
        "metadata": {
          "labels": {
            "app.kubernetes.io/managed-by": "skaffold",
            "skaffold.dev/builder": "local",
            "skaffold.dev/cleanup": "true",
            "skaffold.dev/deployer": "kustomize",
            "skaffold.dev/namespace": "default",
            "skaffold.dev/run-id": "static",
            "skaffold.dev/tag-policy": "git-commit",
            "skaffold.dev/tail": "true"
          },
          "name": "my-statefulset-data"
        },
        "spec": {
          "accessModes": [
            "ReadWriteOnce"
          ],
          "resources": {
            "requests": {
              "storage": "1Gi"
            }
          },
          "storageClassName": "fast"
        }
      }
    ]
  }
}
Also encountering this issue, I've been able to work around it by dropping the following into my kustomization.yaml:

commonLabels:
  skaffold.dev/run-id: static
  app.kubernetes.io/managed-by: skaffold
Be careful with this approach. Those labels get added to Deployment selector.matchLabels and Service selectors as well.
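To illustrate the caveat above, here is a rough sketch of what kustomize does with commonLabels (resource and label values are hypothetical): the labels land not only on metadata and the pod template, but also in the Deployment's selector, which is immutable after creation.

```yaml
# kustomization.yaml
commonLabels:
  skaffold.dev/run-id: static
resources:
  - deployment.yaml

# `kustomize build` then emits something like:
# apiVersion: apps/v1
# kind: Deployment
# metadata:
#   labels:
#     skaffold.dev/run-id: static
# spec:
#   selector:
#     matchLabels:
#       skaffold.dev/run-id: static   # injected into the selector too
#   template:
#     metadata:
#       labels:
#         skaffold.dev/run-id: static
```

So using a truly static value is safe, but changing it later would force recreating the Deployment and any Services selecting on it.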
I'm running into this as well. Until there's a fix for this in Skaffold, I've removed references to my StatefulSet from my skaffold config (so I just update that manually now).