Dashboard: What happened to the cluster-service annotation?

Created on 14 Apr 2017 · 10 comments · Source: kubernetes/dashboard

https://github.com/kubernetes/dashboard/blob/b98d167dadaafb665a28091d1e975cf74eb31c94/src/deploy/kubernetes-dashboard.yaml

Why do the YAMLs in this repo not carry the cluster-service label that causes Dashboard to appear in kubectl cluster-info?

For example, in some Dashboard YAML that we imported ages ago and have been updating ever since, we had these:

    labels:
      kubernetes.io/cluster-service: "true"
      k8s-app: kubernetes-dashboard
      version: v1.5.1
kind/feature lifecycle/frozen

All 10 comments

Hmmm... I don't remember the full problem off the top of my head. The behavior you describe is just one part; the other part is the addon manager. It uses this label to synchronize the cluster state with static manifest files. The behavior was something like this:

  • addon manager reads a YAML from disk -> deploys the contents
  • addon manager reads all deployments from the API server with the label cluster-service: true -> deletes all that do not exist as files

As a result, if you add this label, addon manager will remove Dashboard after a minute or so.

So, at least this was the behavior some time ago. I think kubeadm does not use addon-manager, but it is still part of the kube-up script.
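A minimal sketch of the deletion scenario described above, assuming the legacy addon-manager that reads manifests from /etc/kubernetes/addons on the master (the path and API version are assumptions for the 2017-era setup; only the relevant metadata is shown):

    # Sketch: a Deployment carrying only this label, with no matching manifest
    # file on the master, is treated by the legacy addon-manager as an orphaned
    # addon and deleted on its next sync pass.
    apiVersion: extensions/v1beta1   # Deployment API group circa 2017
    kind: Deployment
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"   # the label addon-manager keys on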

Thanks for the very comprehensive answer. It looks like kubeadm doesn't use addon-manager. It deploys kube-dns by itself, and users are expected to deploy Dashboard themselves (for now).

We're deploying Dashboard with addon-manager, so we'll still need those labels. Thanks again for clarifying with such detail!

I have not tested it yet, but it seems that addon manager has been refactored:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/addon-manager/kube-addons.sh

I think we can use kubernetes.io/cluster-service: "true" now

@cheld Addon manager is deprecating this label:

https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/addon-manager#addon-manager

However, kubectl cluster-info depends on it, so perhaps we can use it.
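For context, kubectl cluster-info lists the kube-system Services that carry this label, so it is the Service object that needs it. A minimal sketch, with the name and ports assumed from the stock Dashboard manifest of this era:

    # Sketch: kubectl cluster-info reports kube-system Services labeled
    # kubernetes.io/cluster-service=true. Name and ports are assumptions.
    apiVersion: v1
    kind: Service
    metadata:
      name: kubernetes-dashboard
      namespace: kube-system
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"   # makes it show up in cluster-info
    spec:
      selector:
        k8s-app: kubernetes-dashboard
      ports:
        - port: 80
          targetPort: 9090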

@maciaszczykm The addon manager deprecates this label for use by the addon manager itself; it is not deprecated in general. In my understanding, the addon manager will ignore the label if we also set addonmanager.kubernetes.io/mode=EnsureExists.

@maciaszczykm The addon manager deprecates this label for use by the addon manager itself; it is not deprecated in general.

@cheld Yes, that is what I said.

In my understanding, the addon manager will ignore the label if we also set addonmanager.kubernetes.io/mode=EnsureExists.

Look at this fragment of documentation:

Addons that have this label but no addonmanager.kubernetes.io/mode=EnsureExists will be treated as "reconcile class addons" for now.

In my opinion this means that if kubernetes.io/cluster-service: "true" is set but no addonmanager.kubernetes.io/mode is set, then Addon Manager will treat it as Reconcile. At least this is the way it works right now; I tested it while updating Dashboard's version in the core repository.

That's why we should also set the label addonmanager.kubernetes.io/mode=EnsureExists.
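Putting the thread's conclusion together, a sketch of the label set that keeps the cluster-info listing while telling addon-manager to leave the object alone once it exists:

    # Sketch based on the addon-manager docs linked above: EnsureExists means
    # "create if missing, otherwise hands off", overriding the default
    # Reconcile treatment of objects labeled cluster-service=true.
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        kubernetes.io/cluster-service: "true"          # listed by kubectl cluster-info
        addonmanager.kubernetes.io/mode: EnsureExists  # not reconciled or deleted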

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

/remove-lifecycle stale
