By default, Grafana and the dashboard are disabled.
AFAIK there is no way to do this with kops directly. The easiest way is to run `kubectl apply -f addons/kubernetes-dashboard/v1.6.0.yaml` after your cluster has started.
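For anyone landing here, a minimal sketch of that workflow, assuming the manifest comes from a local checkout of the kops repo and kubectl already points at the new cluster:

```sh
# clone the kops repo to get the bundled addon manifests
git clone https://github.com/kubernetes/kops.git
cd kops
# apply the dashboard addon against the running cluster
kubectl apply -f addons/kubernetes-dashboard/v1.6.0.yaml
```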
I think the best way to handle addons in the future would be something like helm, to avoid having to maintain the same addons in kubernetes/kubernetes/cluster/addons and here in kubernetes/kops/addons/.
This may be a good place to add an optional field in the cluster config YAML to provide either addon YAMLs or helm manifests that are applied upon cluster creation, like:
```yaml
spec:
  ...
  addons:
  - /path/to/addon/yaml/here
  - /path/to/another/addon
```
or, by extension, specify which charts should be installed on creation.
I think both are worth considering because not everyone is using helm.
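As a concrete sketch of what both variants might look like together (every field below is hypothetical; no such fields exist in the kops cluster spec today):

```yaml
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  # hypothetical field: plain manifests applied once the cluster is up
  addons:
  - /path/to/addon/yaml/here
  - /path/to/another/addon
  # hypothetical field: helm charts installed once the cluster is up
  charts:
  - name: stable/heapster
    namespace: kube-system
```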
Has there been any work on adding this field to the cluster spec for Helm charts?
@so0k we are assessing the possibility of adding an MVP for package management. It probably will not support helm initially, but vanilla manifests. Please share specific requirements here!
/label area/addon-manager
/area addon-manager
ok, for now we started using the channels tool:

```sh
channels apply channel kubernetes-dashboard --yes
channels apply channel monitoring-standalone --yes
```
works great for static yaml, and we have our own channel:

```sh
channels apply channel -f beekeeper.yaml --yes
```
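For context, such a channel file is a `kind: Addons` manifest; a minimal sketch of what our beekeeper.yaml could look like, with names and versions matching the table below (the manifest paths here are made up):

```yaml
kind: Addons
metadata:
  name: beekeeper
spec:
  addons:
  - name: tiller.addons.k8s.io
    version: 2.5.2
    manifest: tiller/v2.5.2.yaml
  - name: honestbee.rbac.k8s.io
    version: 1.5.2
    manifest: rbac/v1.5.2.yaml
```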
we'd like to be able to pass that channel into the cluster spec (or specify in the cluster spec that we want the kubernetes-dashboard and monitoring-standalone addons after the bootstrap).
```
channels get addons
NAMESPACE     NAME                           VERSION        CHANNEL
kube-system   core.addons.k8s.io             1.4.0          s3://<state-bucket>/<cluster>/addons/bootstrap-channel.yaml
kube-system   dns-controller.addons.k8s.io   1.6.1          s3://<state-bucket>/<cluster>/addons/bootstrap-channel.yaml
kube-system   honestbee.rbac.k8s.io          1.5.2          beekeeper.yaml
kube-system   kube-dns.addons.k8s.io         1.6.1-alpha.2  s3://<state-bucket>/<cluster>/addons/bootstrap-channel.yaml
kube-system   kubernetes-dashboard           1.6.1          kubernetes-dashboard
kube-system   limit-range.addons.k8s.io      1.5.0          s3://<state-bucket>/<cluster>/addons/bootstrap-channel.yaml
kube-system   monitoring-standalone          1.6.0          monitoring-standalone
kube-system   storage-aws.addons.k8s.io      1.6.0          s3://<state-bucket>/<cluster>/addons/bootstrap-channel.yaml
kube-system   tiller.addons.k8s.io           2.5.2          beekeeper.yaml
```
when we can't bootstrap static yaml files, we switch to helm deployments
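e.g. something like this, assuming tiller is already deployed in the cluster (chart and release names are only illustrative):

```sh
# helm 2 style invocation: install a templated addon as a release
helm install stable/heapster --name monitoring --namespace kube-system
```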
So yet another tool besides Helm and Kubernetes Addon Manager? 😱
Channels is slightly different, as it only manages installation and keeps track of updates; the addon manager is an active component in the cluster ensuring the resources stay deployed - if I'm not wrong?
Also, channels is about 2 years old and has been at the core of nodeup since the start?
maybe if ksonnet support were added to channels...
@fhemberger we are not trying to replace helm, but you may want something to install tiller ;) This is more orchestration than package management.
the cluster-autoscaling addon channel is quite broken: it doesn't have the correct name for the addon.yaml file, and it requires templating (hence... the need for ksonnet support)
@chrislovecnm I see. But usually, there are at least 10 ways to do anything in Kubernetes. And the community keeps propelling that by creating even more tools. That's what I'm a bit concerned about.
I'd love to see one single maintained way to install stuff in Kubernetes itself and in kops. Otherwise it's too hard to keep addons in sync across the different toolchains.
there are many distributions for equally many use cases
@so0k How many different use cases are there to install stuff like Heapster? 😉
@fhemberger that is a much bigger question, one that this project cannot answer :) @justinsb did the addon manager ever go anywhere in sig-arch?
We already have the YAML for add-ons, and the file structure defined. I would just need to wire in the capability for a `kind: Addons` to be read by `kops create -f`. We can automatically add those addons from disk. The challenge is the `kops edit` and `kops update` workflow. I am wondering if we should add addons to the cluster spec. @justinsb any ideas?
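To make that concrete, a speculative sketch of an Addons file that `kops create -f` could consume, reusing the existing channel format (nothing here is implemented yet):

```yaml
kind: Addons
metadata:
  name: my-addons
spec:
  addons:
  # hypothetical entry pointing at a manifest on disk
  - name: kubernetes-dashboard
    version: 1.6.1
    manifest: kubernetes-dashboard/v1.6.1.yaml
```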
@chrislovecnm - in our case, we started by using channels for basic addons (tiller, some custom rbac roles), but had to extend it to include more advanced cases as follows:
We are managing all our infrastructure with Terraform, so kops generates the cluster Terraform definitions as modules of a bigger plan. Another module uses Terraform templates to render some AWS resources into addon manifests. Helm is perfect for templated bootstrapping, but some Helm charts didn't work (cluster autoscaler) where the kops addon did (though it requires a bunch of sed statements); Terraform templates provided the perfect middle ground.
If we were able to have our custom addons channel url `s3://path/to/our/custom/addons` added to the clusterSpec or instanceGroup spec, so that it is added by kops into the `kube_env.yaml` conf used by nodeup...:
```yaml
Assets:
- https://storage.googleapis.com/kubernetes-release/release/v1.6.8/bin/linux/amd64/kubelet
...
ClusterName: my-cluster
ConfigBase: s3://kops-state-store/my-cluster
InstanceGroupName: ...
Tags:
- ...
channels:
- s3://kops-state-store/my-cluster/addons/bootstrap-channel.yaml
- s3://path/to/our/custom/addons
protokubeImage:
  name: protokube:1.6.2
  source: https://kubeupv2.s3.amazonaws.com/kops/1.6.2/images/protokube.tar.gz
```
we would make sure it is available to protokube, and the cluster would bootstrap with the addons we'd like... (currently, we are using a make target, `make bootstrap-channels`)
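For reference, that make target just wraps the channels invocations shown earlier; roughly (the exact target body is ours, shown here as a sketch):

```sh
# hypothetical body of our bootstrap-channels make target:
# apply the stock channels plus our custom one after kops finishes
channels apply channel kubernetes-dashboard --yes
channels apply channel monitoring-standalone --yes
channels apply channel -f beekeeper.yaml --yes
```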
or is this approach too tied to implementation details?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
/lifecycle frozen
I'd close this as resolved with the recent docs added by @thrawny