Kops: How to use kops addons

Created on 6 Oct 2017 · 16 comments · Source: kubernetes/kops

I can't seem to find the documentation on how to use kops' addons (found in kops/addons); more specifically, logging-elasticsearch.

It seems I could just apply the YAML config via kubectl, but what is that "channels" tool mentioned here?

Thanks!

area/addon-manager area/documentation lifecycle/frozen

All 16 comments

Great question!

Channels is unfortunately one of the areas with little documentation. It is one of the components we use internally for kops, but it can be used externally as well.

Internal details

Inside a cluster's kops state store s3 bucket you will see an addons folder. That folder is automatically generated by kops for components such as kube-dns, dns-controller, cni, etc. Those manifest files are used by channels, which runs inside the protokube container on the master.

Nodeup bootstraps a master and installs protokube. Once k8s is up, protokube executes channels. Channels accesses the YAML staged in s3 and applies the different manifests with kubectl.
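As the protokube logs further down this thread show, the command protokube runs for each configured channel looks like the following (the channel can be an upstream addon name or an s3:// URL):

channels apply channel kubernetes-dashboard --v=4 --yes
channels apply channel s3://my-bucket/custom-channel.yaml --v=4 --yes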

At this point we do not recommend editing the addons files in the s3 bucket, because kops manages them and we do not provide the capability to modify how kops handles them. There have been discussions about supporting custom addon manifests as well, but that is future work.

External use

You can use the channels binary just like protokube uses it. I do not have the CLI options handy, but we do have users who utilize it.

Why would I use it? channels allows upgrading specific components and targeting components at certain versions of k8s. It is a very simple tool, and I would recommend a more robust CI/CD tool for complex components. But when you need to bootstrap basic applications it works well.
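As an illustration of the version targeting mentioned above, a channel manifest can pin different addon versions to different Kubernetes version ranges. A minimal sketch, using a hypothetical addon name, versions, and file names (this mirrors the structure of the upstream addon.yaml files in kops/addons):

kind: Addons
metadata:
  name: my-addon
spec:
  addons:
  - version: 1.1.0
    selector:
      k8s-addon: my-addon.addons.k8s.io
    kubernetesVersion: "<1.6.0"    # only applied to clusters below Kubernetes 1.6
    manifest: v1.1.0.yaml
  - version: 1.2.0
    selector:
      k8s-addon: my-addon.addons.k8s.io
    kubernetesVersion: ">=1.6.0"   # only applied to clusters at 1.6 or above
    manifest: v1.2.0.yaml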

We should drop all of this into docs, and I am sure I have missed a couple of details.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Since this seems to be the only spot to find the above useful piece of information, please tag this with something that will prevent it from disappearing into the abyss of auto-closed issues.

It would be appreciated by pretty much anyone trying to read the kops docs if docs/addon_manager.md were rewritten by someone who understands what this document is trying to communicate. Otherwise, please move it to the work-in-progress directory.

Docs are going to vary in quality; however, this file reads less like documentation and more like a skillful transcription of one side of a conversation between two people who are already familiar with the subject (and the eavesdropping transcriptionist arrived after the conversation had already been going for 10 minutes :)

/lifecycle frozen
/remove-lifecycle stale

@gladiatr72 you are actually able to use the above bot commands ;)

/area documentation

/lifecycle frozen
/remove-lifecycle rotten

Hi! I'm still a beginner with Kubernetes and Kops, and I have three questions after reading this:

  • I see in many places the "channel tool" mentioned. Where can I download that tool?
  • I know that I can use kubectl to create an addon at a specific version, but is there any way I can just apply a YAML with an Addons resource like this? Everything in my cluster creation must be automatable, and it's not very easy if I have to parse and interpret this YAML to do that. Is this what the "channels tool" does?
  • Does that "channels tool" allow me to make an addon run as the cluster boots, just like the internal ones?

@ruippeixotog
I cover the channels tool in my kops workshop

If you trust me, you can see how I get the channels binary for that workshop... (custom compiled and pushed to GitHub Pages)
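If you'd rather not trust a third-party binary, note that the channels source lives in the kops repository itself, so you can build it yourself. A rough sketch, assuming a working Go toolchain (the Makefile target and output path may vary across kops versions):

git clone https://github.com/kubernetes/kops.git
cd kops
make channels    # or: go build -o channels ./channels/cmd/channels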

What's missing are the instructions on using the AddonsSpec field in the kops cluster manifest, but it's very simple.

Here's an example:

spec:
  ...
  addons:
  - manifest: s3://my-bucket/custom-channel.yaml
  - manifest: kubernetes-dashboard
  - manifest: monitoring-standalone

This will bootstrap two upstream addons (kubernetes-dashboard and monitoring-standalone) and a custom one I pushed to a private s3 bucket (using terraform).

my protokube logs on the masters look as follows

channels.go:31] checking channel: "s3://my-bucket/custom-channel.yaml"
channels.go:45] Running command: channels apply channel s3://my-bucket/custom-channel.yaml --v=4 --yes
channels.go:34] apply channel output was: I0510 12:13:46.001608    3941 addons.go:38] Loading addons channel from "s3://my-bucket/custom-channel.yaml"
s3context.go:172] Found bucket "my-bucket" in region "my-region"
s3fs.go:210] Reading file "s3://my-bucket/custom-channel.yaml"
  No update required
channels.go:31] checking channel: "kubernetes-dashboard"
channels.go:45] Running command: channels apply channel kubernetes-dashboard --v=4 --yes
channels.go:34] apply channel output was: I0510 12:13:47.133461    3946 addons.go:38] Loading addons channel from "https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashbo
context.go:159] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/addon.yaml
  NAME                        CURRENT        UPDATE
  kubernetes-dashboard        -        1.8.3
addon.go:130] Applying update from "https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.8.3.yaml"
context.go:159] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.8.3.yaml
apply.go:67] Running command: kubectl apply -f /tmp/channel536404810/manifest.yaml
channel_version.go:176] sending patch: "{\"metadata\":{\"annotations\":{\"addons.k8s.io/kubernetes-dashboard\":\"{\\\"version\\\":\\\"1.8.3\\\",\\\"channel\\\":\\\"kubernetes-dashboard\\\"}\"}}}"
  Updated "kubernetes-dashboard" to 1.8.3
channels.go:31] checking channel: "monitoring-standalone"
channels.go:45] Running command: channels apply channel monitoring-standalone --v=4 --yes
channels.go:34] apply channel output was: I0510 12:13:48.487837    3973 addons.go:38] Loading addons channel from "https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standa
context.go:159] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/addon.yaml
  NAME                        CURRENT        UPDATE
  monitoring-standalone        -        1.6.0
addon.go:130] Applying update from "https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.6.0.yaml"
context.go:159] Performing HTTP request: GET https://raw.githubusercontent.com/kubernetes/kops/master/addons/monitoring-standalone/v1.6.0.yaml
apply.go:67] Running command: kubectl apply -f /tmp/channel001882116/manifest.yaml
channel_version.go:176] sending patch: "{\"metadata\":{\"annotations\":{\"addons.k8s.io/monitoring-standalone\":\"{\\\"version\\\":\\\"1.6.0\\\",\\\"channel\\\":\\\"monitoring-standalone\\\"}\"}}}
  Updated "monitoring-standalone" to 1.6.0

The best part is that the masters keep polling the channel for updates and automatically apply anything I push to s3.
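For example, bumping the version in the channel manifest and re-uploading it to s3 is enough to trigger a new rollout. A minimal sketch (the addon name and selector here are hypothetical; only the channel location comes from the spec above):

kind: Addons
metadata:
  name: custom-channel
spec:
  addons:
  - version: 1.0.1            # bumped from 1.0.0; channels sees the newer version and re-applies
    selector:
      k8s-addon: custom-channel.addons.k8s.io
    manifest: v1.0.1.yaml

channels records the applied version in an annotation (the "sending patch" lines in the logs above), which is how it decides whether an update is required.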

I use it mainly to bootstrap rbac and helm

@so0k thanks for the great explanation! I ended up declaring the addons in my Cluster YAML file in order to automate their deployment. I had found the AddonsSpec schema before you posted, but I didn't know I could declare the "standard" addons by name and use S3 links, which will surely be helpful 👍

Now that Helm is an official incubator project, why don't we rely on Helm charts for the addons? I think we could rely on the upstream Helm chart management for components like the dashboard, etc.

Please don't _require_ Helm - I and others don't like the fact that Helm is either a resource hog (needing to run a Tiller in each namespace) or a security hole (effectively giving sudo to everything running in the cluster).

Ah cool - that would remove most/all of my issues.

I'd close this as resolved with the recent docs added by @thrawny

I am currently unable to use the addon system and am not sure what I am doing wrong. The documentation mentions that you can apply addons at creation time using the YAML file method, and it also, very vaguely, mentions how you can reference addons. From reading through this thread I've discovered that the snippet provided is intended to go in your Cluster definition.

Once I'd discovered that, I tried modifying my Cluster definition to install the k8s dashboard and ran a create. However, there is no reference to the dashboard being installed in the create/update logs, nor is there any reference in the addons directory of the state store. I'm not sure what I'm doing wrong at this point; I was just hoping to install the k8s dashboard at cluster creation time.

apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: test.k8s.local
spec:
  ...
+ addons:
+   - manifest: kubernetes-dashboard

For those who want to use kops addons, this is how I launch addons when creating a cluster:

Declare addons in Cluster.spec. For example:

kind: Cluster
metadata:
  name: k8s-cluster.example.com
spec:
  addons:
  - manifest: s3://mybucket/addons/k8s-cluster.example.com/metrics-server/addon.yaml # URL of a custom addon that I put in s3; this allows protokube to fetch the manifest and apply it with the `channels apply` command

In the s3 bucket, there are 2 files.
addon.yaml declares the addon:

kind: Addons
metadata:
  name: metrics-server
spec:
  addons:
  - version: 0.3.6
    selector:
      k8s-addon: metrics-server.addons.k8s.io
    manifest: v0.3.6.yaml

and v0.3.6.yaml is your main deployment manifest:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
  labels:
    rbac.authorization.k8s.io/aggregate-to-view: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: "amd64"
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/name: "Metrics-server"
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: main-port
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system

Then, when you run kops create cluster --yes, the cluster will be created with your addon installed.
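For completeness, one way to stage the two files in the bucket is the AWS CLI; a sketch assuming the CLI is configured with write access to the bucket used above:

aws s3 cp addon.yaml s3://mybucket/addons/k8s-cluster.example.com/metrics-server/addon.yaml
aws s3 cp v0.3.6.yaml s3://mybucket/addons/k8s-cluster.example.com/metrics-server/v0.3.6.yaml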
