Minikube: Not all parameters available in "minikube start" have a corresponding setting in "minikube config", and for those that do, some don't seem to work

Created on 25 Oct 2019 · 9 comments · Source: kubernetes/minikube

The problem in this issue was brought up by me in https://github.com/kubernetes/minikube/issues/5706,
but as requested I am opening a new issue for it.
When trying to replace a "minikube start" command with a lot of params with a plain "minikube start" (i.e., without params) plus the corresponding "minikube config" settings, I noticed that not all parameters available in "minikube start" have a corresponding setting in "minikube config", e.g.
--extra-config
--apiserver-{name|names|port|ips} (Not that I strictly needed those for my use case, but someone might...)

Further, some settings that do exist in "minikube config" don't seem to work, e.g.,

$ minikube config set memory 12G
*
X Set failed: [memory:strconv.Atoi: parsing "12G": invalid syntax]

while

--memory 12g

worked fine.

Same problem with

$ minikube config set insecure-registry myinsecureregistry:12345
*
X Set failed: [Cannot enable/disable invalid addon insecure-registry]

As before, the corresponding "minikube start" switch

--insecure-registry myinsecureregistry:12345

works.

Maybe I am using "minikube config" incorrectly, but other settings (cpus, disk-size) worked when used that way, e.g.

$ minikube config set disk-size 30G
! These changes will take effect upon a minikube delete and then a minikube start

$ minikube config set cpus 4
! These changes will take effect upon a minikube delete and then a minikube start

help wanted kind/feature lifecycle/stale priority/important-longterm

Most helpful comment

Should this issue be labeled as an enhancement request instead of a bug, so it doesn't get closed without a fix?

All 9 comments

You pointed out a shortcoming of minikube, and I agree we need to fix it!

We could either make sure any start args automatically get saved to config, or update the list of config items that should be supported.

Either way, I would be happy to review a PR that fixes this.

/assign @nanikjava

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Should this issue be labeled as an enhancement request instead of a bug, so it doesn't get closed without a fix?

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

We still need to make this happen. I agree with @PabloCamino that this is a feature request.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@medyagh is there already a corresponding feature request ticket to track this?
