Kops: Configuration file instead of command line switches

Created on 5 Aug 2016 · 17 comments · Source: kubernetes/kops

Any thoughts on driving this off of a yaml file, rather than switches?

area/documentation

Most helpful comment

So this feature is already coded into kops https://github.com/kubernetes/kops/blob/master/cmd/kops/create.go#L44

I think we have a little more effort to put into documenting it, and providing an example YAML file with all possible configurations.

I can try to get one out soon, if nobody else wants to take it.

All 17 comments

We do, I think!

Just like with k8s, there is an underlying spec for the cluster and for instance groups. e.g. https://github.com/kubernetes/kops/blob/master/upup/pkg/api/cluster.go

When you kops edit cluster or kops edit ig you are editing the actual spec.

The CLI switches act as shortcuts to make it easier to create your seed config.

Or are you saying we should allow you to just specify a yaml file, like kops create -f <clusterspec>? That would also be good, and not terribly hard, if so!

The second case, kops create -f my_cluster.yaml

Also, it would be good to support something like kubectl replace or kubectl apply. A use case that came up in sig-aws was changing the default subnet CIDRs.

We need this documented!!!

I'm still confused what we want/need here... Assigning to you @chrislovecnm - please send a PR with docs/use cases

Hi, just wanted to note this issue is very important in my use case. I need to automate cluster creation with custom options which are not editable via command-line switches (spot price, CIDRs). Is there currently any way to script the whole process of cluster creation? Right now I am blocked, because the only way to change the spot price is by editing it in the editor.

So this feature is already coded into kops https://github.com/kubernetes/kops/blob/master/cmd/kops/create.go#L44

I think we have a little more effort to put into documenting it, and providing an example YAML file with all possible configurations.

I can try to get one out soon, if nobody else wants to take it.

Just reiterating what I said in the channel: I use this currently. My main wish is that I could follow the same pattern with kops update cluster -f; currently it's required to duplicate the changes in the YAML files we store in git and in kops edit cluster|ig.

Any update on this? Creation by file/spec would be highly appreciated for headless/automated deployment of Kubernetes clusters through kops. It was written above that the code is already present, but it's not yet released?

Hey @vlerenc

So, as it stands today on 1.5.3, a user can certainly use kops create -f $CONFIG.

A baseline config can be found by using a basic kops edit cluster command as in:

kops edit cluster $NAME

My example for private topology looks like:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2017-03-19T13:34:45Z"
  name: demo.nivenly.com
spec:
  api:
    loadBalancer:
      type: Public
  channel: stable
  cloudProvider: aws
  configBase: s3://nivenly-state-store/demo.nivenly.com
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-west-1a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-us-west-1a
      name: a
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.5.2
  masterInternalName: api.internal.demo.nivenly.com
  masterPublicName: api.demo.nivenly.com
  networkCIDR: 172.20.0.0/16
  networking:
    weave: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-west-1a
    type: Private
    zone: us-west-1a
  - cidr: 172.20.0.0/22
    name: utility-us-west-1a
    type: Utility
    zone: us-west-1a
  topology:
    bastion:
      bastionPublicName: bastion.demo.nivenly.com
    dns:
      type: Public
    masters: private
    nodes: private
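Instance groups follow the same pattern (kops edit ig, per the comments above). A minimal InstanceGroup spec in the same API version might look like the following; the machine type, sizes, and the commented-out maxPrice (relevant to the spot-price use case raised earlier) are illustrative values, not taken from this thread:

```yaml
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: demo.nivenly.com
  name: nodes
spec:
  machineType: t2.medium
  minSize: 2
  maxSize: 2
  role: Node
  # maxPrice: "0.10"   # spot price; editable here rather than only via the editor
  subnets:
  - us-west-1a
```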

kops update -f

Thanks for the suggestion @blakebarnett!

Now this frankly is a great idea. I can't believe we don't have this yet. I will push for this in the next release(s) and see if I can't find a volunteer to help code it. I know @geojaz is working on publicizing kops create -f in general, so maybe he would be interested in kops update -f as well? 😉

It will be tricky managing the deltas, but thanks to the way kops handles its internal models, this feature should fit in nicely.

@justinsb what are your thoughts on this as far as implementation goes? This seems like low hanging fruit but want to bounce it off you before I get too carried away here.

I think we should close this issue as kops create -f is now in place, and open up an issue in favor of kops update -f.

Ah, great, it works like a charm. Thank you!

Also, you can use kops replace -f in place of kops upgrade -f. There may be some more validation that could be done with something like kops upgrade -f, but replace is working well for us.

Thanks, yes, that's what I was using so far (after a somewhat convoluted create step), but now create and replace both work from the same generated source, so that's really wonderful.

It's still unclear to me how I can get the initial YAML template.
@kris-nova mentioned it can be obtained from kops edit cluster $NAME, but that requires an s3://<store>/<name>/config to be present. In other words, to get a template for kops create -f I need to issue kops create <flags>, then get a template with kops edit, and only then can I edit the template and use kops create -f. Is that correct?
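To sketch that round trip end to end (cluster name and state-store bucket below are placeholders, and the exact create flags will vary):

```shell
# 1. Seed the state store with a "close-enough" cluster; kops generates
#    the boilerplate (subnet CIDRs, etcd members, etc.)
kops create cluster --name demo.example.com \
  --state s3://my-state-store --zones us-west-1a

# 2. Export the generated spec as a YAML template
kops get cluster demo.example.com \
  --state s3://my-state-store -o yaml > cluster.yaml

# 3. Edit cluster.yaml (spot price, CIDRs, ...) and push it back
kops replace -f cluster.yaml --state s3://my-state-store
```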

It would be nice to have an equivalent of kubectl apply -f; that way you could drive your CI from a cluster and instance group spec file. The problem with kops replace is that it won't create new resources if they're not there, or delete them when removed.

@max-lobur that's correct. Kops does some subnet calculations and generates a fair amount of boilerplate. An initial create command with "close-enough" settings, followed by kops get cluster <name> -o yaml > cluster_name.yaml, is a pretty smooth process considering everything it's doing. I think there's also some work in flight to make this a one-step process: https://github.com/kubernetes/kops/pull/2954

@gambol99 I think those features are where they're headed with the kops server/controller features. Not sure when they'll land.
