We "bury the lede" here: https://github.com/kubernetes/kops/blob/master/docs/instance_groups.md
Multiple instance groups are awesome :-)
I was wondering about this. I assume a pretty common use case would be to set up a small pool of on-demand or reserved instances, then a spot fleet. If Kube could prefer the spot fleet, but use the other if it can't get enough instances, it would save quite a bit of money. Can you weight IGs and set up a fallback?
There's no weighting of instance groups at this point; interesting idea, though.
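For anyone landing here with the same question: today you would just define two groups and size them independently. A minimal sketch of a spot-priced instance group, assuming the maxPrice field of the v1alpha2 InstanceGroup spec (cluster name, sizes and bid price below are made up):

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: spot-nodes
  labels:
    kops.k8s.io/cluster: mycluster.example.com   # hypothetical cluster name
spec:
  role: Node
  machineType: m4.large
  maxPrice: "0.10"    # setting a bid price makes kops run this group as EC2 spot instances
  minSize: 2
  maxSize: 10
  subnets:
  - us-east-1a

The on-demand group would look the same minus maxPrice. As noted above there is no built-in weighting between groups, so any spot-first/fallback behaviour would have to come from the ASG or autoscaler side, which is exactly what this issue is asking about.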
Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
So how can we use multiple instance groups? I did stumble upon the docs that @justinsb mentioned above but I was not clear on how to use them.
What we need is to run different instance types for different groups of nodes.
See kops create ig --help, or use a manifest.
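To make that concrete, a rough sketch of the CLI flow (group name, subnet and cluster name here are hypothetical):

# create a new instance group in one subnet/AZ; this drops you into an editor
kops create ig highmem-nodes --subnet us-east-1a --name mycluster.example.com
# adjust machineType, minSize and maxSize in the spec, save and exit,
# then review and apply the change
kops update cluster mycluster.example.com --yes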
/lifecycle frozen
Thanks @chrislovecnm, I've played with it and managed to create additional nodes.
I've noticed a couple of things that kops might handle better though:
(1) A panic when creating an instance group with kops create ig:
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x1e0 pc=0x31bc88f]
goroutine 1 [running]:
k8s.io/kops/pkg/apis/kops.(*InstanceGroup).AddInstanceGroupNodeLabel(...)
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/pkg/apis/kops/instancegroup.go:172
main.RunCreateInstanceGroup(0xc420b3f780, 0xc42032bb00, 0xc420b45f80, 0x1, 0x3, 0x57379e0, 0xc42000e018, 0xc420b34690, 0x0, 0x0)
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/cmd/kops/create_ig.go:160 +0x2ff
main.NewCmdCreateInstanceGroup.func1(0xc42032bb00, 0xc420b45f80, 0x1, 0x3)
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/cmd/kops/create_ig.go:87 +0x77
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc42032bb00, 0xc420b45ec0, 0x3, 0x3, 0xc42032bb00, 0xc420b45ec0)
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:603 +0x234
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x592f820, 0x3506d00, 0x0, 0x0)
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:689 +0x2fe
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(0x592f820, 0x5963a38, 0x0)
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:648 +0x2b
main.Execute()
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/cmd/kops/root.go:95 +0x91
main.main()
/private/tmp/kops-20171205-21267-11fzn4e/kops-1.8.0/src/k8s.io/kops/cmd/kops/main.go:25 +0x20
(2) When the instance group's max size is lower than its min size, the AutoscalingGroup task just keeps retrying against AWS:
W0112 17:34:27.745662 86520 executor.go:109] error running task "AutoscalingGroup/testig.$clustername" (9m58s remaining to succeed): error creating AutoscalingGroup: ValidationError: Max bound, 2, must be greater than or equal to min bound, 5
status code: 400, request id: bf7ca677-f7e8-11e7-8428-fb0b1ec19d3e
I0112 17:34:27.745692 86520 executor.go:124] No progress made, sleeping before retrying 1 failed task(s)
I0112 17:34:37.749998 86520 executor.go:91] Tasks: 132 done / 133 total; 1 can run
W0112 17:34:38.189608 86520 executor.go:109] error running task "AutoscalingGroup/testig.$clustername" (9m47s remaining to succeed): error creating AutoscalingGroup: ValidationError: Max bound, 2, must be greater than or equal to min bound, 5
status code: 400, request id: c5b29d31-f7e8-11e7-b3a5-25efbdb3be2a
I0112 17:34:38.189648 86520 executor.go:124] No progress made, sleeping before retrying 1 failed task(s)
I0112 17:34:48.190326 86520 executor.go:91] Tasks: 132 done / 133 total; 1 can run
W0112 17:34:49.007471 86520 executor.go:109] error running task "AutoscalingGroup/testig.$clustername" (9m37s remaining to succeed): error creating AutoscalingGroup: ValidationError: Max bound, 2, must be greater than or equal to min bound, 5
status code: 400, request id: cc280ee5-f7e8-11e7-9486-31a2a6721f89
The panic is an open issue ;) And the min/max problem is something new that you found!
@chrislovecnm should I create a separate issue for that?
Please
So if you want to convert your node IGs from one IG spanning multiple AZs to one IG per AZ, would you change the existing IG to a single AZ subnet and then create additional IGs, each pointing to one of the subnets corresponding to an AZ?
It would be nice to have that migration path documented, which I would be happy to do if someone could confirm.
@Globegitter so that, or delete the old one ... I usually name my IGs based on AZ so that I know what they are.
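A rough sketch of that migration path, with hypothetical cluster, group and subnet names:

# pin the existing group to a single AZ (edit spec.subnets down to one entry, e.g. us-east-1a)
kops edit ig nodes --name mycluster.example.com
# create one new group per remaining AZ
kops create ig nodes-us-east-1b --subnet us-east-1b --name mycluster.example.com
kops create ig nodes-us-east-1c --subnet us-east-1c --name mycluster.example.com
# apply and roll the cluster so instances end up in the right groups
kops update cluster mycluster.example.com --yes
kops rolling-update cluster mycluster.example.com --yes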
Please PR away what makes sense to you. I ❤️ documentation PRs
I think there are sufficient docs for this that make this clear
/close
@rifelpet: Closing this issue.
In response to this:
I think there are sufficient docs for this that make this clear
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.