1. What kops version are you running? The command kops version will display
this information.
1.11.0
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
N/A
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops create cluster --state 'file:///tmp/kops-state' --zones us-east-1a foo.cluster.k8s.local
5. What happened after the commands executed?
State Store: Invalid value: "file:///tmp/kops-state": Unable to read state store.
Please use a valid state store when setting --state or KOPS_STATE_STORE env var.
For example, a valid value follows the format s3://<bucket>.
Trailing slash will be trimmed.
The issue is caused by vfs.IsClusterReadable, as called from factory.Clientset, returning false for all filesystem state stores.
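To illustrate the diagnosis above, here is a minimal reproduction sketch (the path and cluster name are just placeholders): pre-creating a readable directory changes nothing, which is consistent with the check rejecting the file:// scheme itself rather than failing on an actual read.

```sh
# Pre-create the state directory so an ordinary read/write would succeed
mkdir -p /tmp/kops-state
ls -ld /tmp/kops-state   # drwxr-xr-x ... /tmp/kops-state

# kops still refuses the store, even though the directory exists and is readable
kops create cluster \
  --state 'file:///tmp/kops-state' \
  --zones us-east-1a \
  foo.cluster.k8s.local
# => State Store: Invalid value: "file:///tmp/kops-state": Unable to read state store.
```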
6. What did you expect to happen?
Not to get an error about being unable to read the state store.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
Irrelevant
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
I0213 14:25:54.779621 34407 factory.go:68] state store file:///tmp/kops-state
State Store: Invalid value: "file:///tmp/kops-state": Unable to read state store.
Please use a valid state store when setting --state or KOPS_STATE_STORE env var.
For example, a valid value follows the format s3://<bucket>.
Trailing slash will be trimmed.
9. Anything else we need to know?
The documentation specifically claims that local filesystem state storage is supported with the file:// protocol.
The bug was triggered when attempting to build a CI workflow around:
- .spec.configBase to refer to the local directory
- kops replace -f <candidate.yaml> --state file:///path/to/local/state/dir
- kops update cluster <cluster_name> --state file:///path/to/local/state/dir

In a CI workflow, this would allow us to preview (for example, during a pull request) how a kops update cluster would run, without touching the actual remote S3 state bucket (from running kops replace).
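As a rough sketch of that flow (candidate.yaml, the state path, and the cluster name are the placeholders from the steps above), the key point is that kops update cluster without --yes only prints the planned changes:

```sh
# Load the candidate manifest from the pull request into a local, throwaway state store
kops replace -f candidate.yaml --state file:///path/to/local/state/dir

# Preview the changes; without --yes, kops update cluster only prints the plan
# and applies nothing, so the real S3 state bucket is never touched
kops update cluster <cluster_name> --state file:///path/to/local/state/dir
```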
Found the PR that introduced the documentation claiming filesystem state stores are supported. Before that, the code and documentation were in sync: filesystem state stores were unsupported.
Are filesystem state stores supported or not?
I've hit the same issue with kops Version 1.11.1 (git-0f2aa8d30)
Also running into this issue, whilst trying to wrap kops as part of a larger deployment tool.
Tested with version 1.14.0-alpha.1 (git-ab68aa2d2). It doesn't work there either.
Likewise. I got bit by this... unfortunately.
Reproduced in 1.12.1 (git-e1c317f9c)
So, kops local state doesn't work? I get the same feeling, but I'm not sure, because I don't see any instructions on how to use it (i.e. should I make the directory myself, or will kops mkdir -p it for me?).
I tried this:
➜ kops create cluster --name=jay
State Store: Invalid value: "file:///tmp/kops": Unable to read state store.
Please use a valid state store when setting --state or KOPS_STATE_STORE env var.
For example, a valid value follows the format s3://<bucket>.
Trailing slash will be trimmed.
➜ cat ~/.kops/config
kops_state_store: file:///tmp/kops
Issue still present in 1.12.2
[root@kube1 tmp]# kops version
Version 1.12.2 (git-3fc9bc486)
[root@kube1 tmp]# kops upgrade cluster localhost --state=file:///tmp/test
State Store: Invalid value: "file:///tmp/test": Unable to read state store.
Please use a valid state store when setting --state or KOPS_STATE_STORE env var.
For example, a valid value follows the format s3://<bucket>.
Trailing slash will be trimmed.
The docs https://github.com/kubernetes/kops/blob/master/docs/state.md list many supported state stores, but in the source code it looks like SSH, local FS, and MemFS are explicitly not allowed.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Same in kops 1.14.0
Same in kops 1.15.0
Same in kops 1.16.0
This feature is due for 1.17
It's been a year, guys - can someone fix this please! Thanks
This feature is due for 1.17
And can be tested in the 1.17.0-beta.2 release.
https://github.com/kubernetes/kops/releases/tag/v1.17.0-beta.2