Kops: Add support for volumes for nodes/masters.

Created on 4 Dec 2017 · 16 comments · Source: kubernetes/kops

  1. Describe IN DETAIL the feature/behavior/change you would like to see.
    I would like to be able to add volumes to my master/node instances, so that I can configure separate volumes for stuff like /var/log or /var/lib/docker.


All 16 comments

@lleszczu just as a side note: assuming disk space is the main concern behind this request, kops already supports setting the disk size and type per instance group. You can use that to override the default root volume (20G, IIRC) with, say, 200G.
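For reference, that override is just two fields in the InstanceGroup spec (a minimal sketch; the values are illustrative):

apiVersion: kops/v1alpha2
kind: InstanceGroup
spec:
  rootVolumeSize: 200   # root volume size in GB
  rootVolumeType: gp2   # EBS volume type for the root disk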

It's not exactly that; the scenario I want to address is:

  • something bad happens that uses up all available disk space and/or inodes on /var/lib/docker
  • tools for monitoring/alerting/mitigating the issue are not affected, because they store their state on a different partition

The biggest value I see in such separation is that today, whenever the disk fills up, you are not even able to SSH into the machine.

This would also be nice to have for security. Ideally, kops should allow some method to partition volumes as well. An attacker could potentially fill up the whole root volume with data and prevent the node from accepting connections, or theoretically hijack applications.

This applies to every partition and volume (/tmp in this example):

Since the /tmp directory is intended to be world-writable, there is a risk of resource exhaustion if it is not bound to a separate partition. In addition, making /tmp its own file system allows an administrator to set the noexec option on the mount, making /tmp useless for an attacker to install executable code. It would also prevent an attacker from establishing a hardlink to a system _setuid_ program and waiting for it to be updated. Once the program was updated, the hardlink would be broken and the attacker would have his own copy of the program. If the program happened to have a security vulnerability, the attacker could continue to exploit the known flaw.

Center for Internet Security (CIS)
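If you already manage the nodes with Ansible (as in the playbook later in this thread), a sketch of enforcing those mount options with the mount module could look like this — untested here, and the tmpfs source and size are assumptions, not something from CIS or this thread:

- mount:
    path: /tmp
    src: tmpfs
    fstype: tmpfs
    opts: noexec,nosuid,nodev,size=1G   # size=1G is an arbitrary example value
    state: mounted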

@lleszczu 👍

I was just about to ask a similar question about how to mount a separate volume for, say, OpenEBS after doing rolling updates, and then I came across kops hooks here: https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md

I suppose you can set up an Ansible playbook, or just execute scripts, to mount the volume and alter the Docker systemd manifest to point at that volume. I haven't tested it (yet), but it sounds like it should work.
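For the Docker half of that, a rough sketch of the Ansible tasks might be (untested; /mnt/docker is a hypothetical mount point, and the daemon.json data-root key only exists on Docker 17.05+ — older daemons use graph):

# Hypothetical: point dockerd's storage at the dedicated volume
- copy:
    dest: /etc/docker/daemon.json
    content: '{"data-root": "/mnt/docker"}'   # assumed path; "graph" on Docker < 17.05
- systemd:
    name: docker
    state: restarted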

@cheddarwhizzy that would work for local instance storage only. We need support for EBS as well, which is handled during launch-configuration creation.

EBS as in Amazon EBS? Here's what I did to mount a separate EBS volume at /var/openebs on each of the nodes in one of my instance groups. I'd assume this would work for /var/lib/docker as well. And rather than installing Ansible at bootstrap, you may want to build a custom image to speed up the rolling update.

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-06-07T20:25:23Z
  labels:
    kops.k8s.io/cluster: core-kubernetes.axial.int
  name: nodes
spec:
  additionalSecurityGroups:
  - sg-12345678
  cloudLabels:
    jenkins: "true"
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2017-12-02
  machineType: m4.xlarge
  maxSize: 6
  minSize: 4
  nodeLabels:
    nginx: "true"
  role: Node
  rootVolumeSize: 200
  rootVolumeType: gp2
  subnets:
  - us-east-1c

  fileAssets:
  - name: ansible-vol-provisioner
    path: /tmp/openebs.yaml
    roles: [Master,Node,Bastion]
    content: |
      ---
      - hosts: localhost
        connection: local
        vars:
          new_vol: false
          cluster: 'core'
          # mount_path and fstype are recorded here for reference; the
          # actual mkfs/mount is done by the hook further down
          mount_path: "/var/openebs"
          device_name: 'sdo'   # attached as sdo; appears as /dev/xvdo on Xen-based instances
          fstype: ext4
          openebs_volume_size: "200"   # size in GiB for a newly created volume
        tasks:

          # Look up any detached volume previously tagged for this cluster
          - ec2_vol_facts:
              region: 'us-east-1'
              filters:
                "tag:Name": "OpenEBS"
                "tag:cluster": "{{ cluster }}"
                attachment.status: detached
            register: openebs_vols

          # No reusable volume found, so flag that a new one is needed
          - set_fact:
              new_vol: true
            when: openebs_vols.volumes | length < 1

          - debug: var=new_vol

          # Gather this instance's ID and region from the EC2 metadata service
          - ec2_metadata_facts:
            register: _facts

          - debug: var=_facts.ansible_facts.ansible_ec2_instance_id

          # Create new volume
          - ec2_vol:
              region: '{{ _facts.ansible_facts.ansible_ec2_placement_region }}'
              instance: '{{ _facts.ansible_facts.ansible_ec2_instance_id }}'
              volume_size: '{{ openebs_volume_size }}'
              device_name: '{{ device_name }}'
              tags:
                Name: OpenEBS
                cluster: "{{ cluster }}"
            when: new_vol

          # Attach existing volume
          - ec2_vol:
              region: '{{ _facts.ansible_facts.ansible_ec2_placement_region }}'
              id: '{{ openebs_vols.volumes[0].id }}'
              instance: '{{ _facts.ansible_facts.ansible_ec2_instance_id }}'
              volume_size: '{{ openebs_volume_size }}'
              device_name: '{{ device_name }}'
            when: not new_vol


  hooks:
  - execContainer:
      command:
      - sh
      - -c
      # mkfs only when the device has no filesystem yet, so a reattached volume is not wiped
      - >-
        chroot /rootfs apt-get update &&
        chroot /rootfs apt-get install -y open-iscsi libffi-dev libssl-dev python-dev build-essential libyaml-dev libpython2.7-dev &&
        chroot /rootfs pip2 install -U setuptools && chroot /rootfs pip2 install -U cffi ansible &&
        chroot /rootfs ansible-playbook /tmp/openebs.yaml -v &&
        chroot /rootfs mkdir -p /var/openebs &&
        (chroot /rootfs blkid /dev/xvdo || chroot /rootfs mkfs.ext4 /dev/xvdo) &&
        chroot /rootfs mount /dev/xvdo /var/openebs && chroot /rootfs chmod 777 /var/openebs
      image: busybox

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Hopefully fixed in #6066
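For anyone finding this later: the InstanceGroup spec gained volumes and volumeMounts fields for exactly this. A rough sketch, with field names taken from the current kops instance-group docs (verify against the kops version you run):

apiVersion: kops/v1alpha2
kind: InstanceGroup
spec:
  volumes:
  - device: /dev/xvdd
    size: 200
    type: gp2
    encrypted: true
  volumeMounts:
  - device: /dev/xvdd
    filesystem: ext4
    path: /var/lib/docker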

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/close

@gambol99: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

