Does it apply to Argo CD managed helm charts? Can it be overwritten/overcome somehow?
This error happens during the kubectl apply phase for one of your ConfigMap resources, and is not specific to Helm. We use kubectl apply to perform deployments. One downside of kubectl apply is that it stores the entire spec as an annotation on the object (which it uses to understand how to handle defaulted vs. deleted fields). What's happening is that the data field in the ConfigMap is likely exceeding the 262144-character limit enforced by the K8s API server, so it cannot fit in the last-applied-configuration annotation that kubectl writes.
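To confirm that this is the cause, you can check the size of the rendered object; the names below are placeholders:

```
# Placeholder names; an object over ~262144 bytes will not fit in the
# kubectl.kubernetes.io/last-applied-configuration annotation.
helm template mychart | wc -c                # rough size of all rendered manifests
kubectl get configmap my-cm -o json | wc -c  # size of an existing object
```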
There's no workaround other than a Sync hook that performs the deploy in a manner different from apply (e.g. kubectl replace). Which helm chart is causing problems?
This is a custom-made helm chart with a big ConfigMap that I currently need to deploy with kubectl create. Could you please reference an example of the Sync hook you mentioned?
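For reference, Argo CD resource hooks are declared via annotations on a resource. A minimal sketch of a PreSync hook Job that uses kubectl replace instead of apply (the image, names, and RBAC setup here are assumptions, not a definitive implementation):

```yaml
# Hypothetical sketch: a PreSync hook Job that replaces a large ConfigMap
# with kubectl replace, sidestepping the last-applied-configuration
# annotation size limit. Image, names, and RBAC are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: replace-big-configmap
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      serviceAccountName: deployer   # needs RBAC to replace ConfigMaps
      restartPolicy: Never
      containers:
      - name: kubectl
        image: bitnami/kubectl
        command: ["sh", "-c"]
        args:
        - kubectl replace --force -f /manifests/big-configmap.yaml
```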
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@jessesuen This issue occurs when installing Grafana with a large dashboard using ConfigMaps.
Wouldn't it be better to use kubectl create or kubectl replace?
I am also curious to the sync hook workaround.
I also have problem with grafana.
For the workaround, try adding these annotations to the resources that need to be deleted before apply:
annotations:
  "helm.sh/hook": pre-install
  "helm.sh/hook-weight": "-1"
  "helm.sh/hook-delete-policy": before-hook-creation
Using these annotations seems to have kinda worked... I still get caught up on a few of the larger ConfigMaps. They just won't sync, with the same error.
This does not seem to work in the case of a Grafana dashboards ConfigMap. Please suggest a workaround / fix.
Same problem; the annotations don't seem to work in the case of the Grafana chart.
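If your Argo CD version supports it, the Replace sync option may help here: it tells Argo CD to use kubectl replace for that resource instead of apply, which avoids the last-applied-configuration annotation entirely. A sketch (check your Argo CD version's sync-options documentation before relying on this):

```yaml
# Per-resource sync option; assumes an Argo CD version that supports
# Replace=true. Only the annotation matters, the rest is placeholder.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards   # placeholder name
  annotations:
    argocd.argoproj.io/sync-options: Replace=true
```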