/kind bug
What steps did you take and what happened:
I was following the steps at https://kubernetes-sigs.github.io/cluster-api/provider_implementations/building_running_and_testing.html to build a new cloud provider named genesis. The make failed at the `kustomize build` step:
$ make
go generate ./pkg/... ./cmd/...
go fmt ./pkg/... ./cmd/...
go vet ./pkg/... ./cmd/...
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go crd
CRD files generated, files can be found under path /gowork/src/sigs.k8s.io/cluster-api-provider-genesis/config/crds.
kustomize build config/default/ > provider-components.yaml
Error: rawResources failed to read Resources: Load from path ../rbac/rbac_role.yaml failed: security; file '../rbac/rbac_role.yaml' is not in or below '/gowork/src/sigs.k8s.io/cluster-api-provider-genesis/config/default'
Makefile:29: recipe for target 'manifests' failed
make: *** [manifests] Error 1
The kustomize version is the latest release.
$ kustomize version
Version: {KustomizeVersion:2.0.2 GitCommit:b67179e951ebe11d00125bdf3c2670e88dca8817 BuildDate:2019-02-25T16:53:32Z GoOs:linux GoArch:amd64}
After trying several recent releases, the most recent kustomize version that works for this case is v1.0.11.
$ kustomize version
Version: {KustomizeVersion:1.0.11 GitCommit:8f701a00417a812558a7b785e8354957afa469ae BuildDate:2018-12-04T18:42:24Z GoOs:unknown GoArch:unknown}
$ make
go generate ./pkg/... ./cmd/...
go fmt ./pkg/... ./cmd/...
go vet ./pkg/... ./cmd/...
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go crd
CRD files generated, files can be found under path /gowork/src/sigs.k8s.io/cluster-api-provider-genesis/config/crds.
kustomize build config/default/ > provider-components.yaml
2019/02/27 03:33:14 Adding nameprefix and namesuffix to Namespace resource will be deprecated in next release.
echo "---" >> provider-components.yaml
kustomize build vendor/sigs.k8s.io/cluster-api/config/default/ >> provider-components.yaml
2019/02/27 03:33:14 Adding nameprefix and namesuffix to Namespace resource will be deprecated in next release.
go test ./pkg/... ./cmd/... -coverprofile cover.out
? sigs.k8s.io/cluster-api-provider-genesis/pkg/apis [no test files]
? sigs.k8s.io/cluster-api-provider-genesis/pkg/apis/genesis [no test files]
ok sigs.k8s.io/cluster-api-provider-genesis/pkg/apis/genesis/v1alpha1 9.064s coverage: 23.8% of statements
? sigs.k8s.io/cluster-api-provider-genesis/pkg/cloud/genesis/actuators/cluster [no test files]
? sigs.k8s.io/cluster-api-provider-genesis/pkg/cloud/genesis/actuators/machine [no test files]
? sigs.k8s.io/cluster-api-provider-genesis/pkg/controller [no test files]
? sigs.k8s.io/cluster-api-provider-genesis/pkg/webhook [no test files]
? sigs.k8s.io/cluster-api-provider-genesis/cmd/manager [no test files]
go build -o bin/manager sigs.k8s.io/cluster-api-provider-genesis/cmd/manager
What did you expect to happen:
`make` should pass with the latest kustomize.
Anything else you would like to add:
Environment:

It should be done without `default`:
kustomize build vendor/sigs.k8s.io/cluster-api/config/
@greut
That does not seem to be true in my environment.
$ kustomize version
Version: {KustomizeVersion:2.0.2 GitCommit:b67179e951ebe11d00125bdf3c2670e88dca8817 BuildDate:2019-02-25T16:53:32Z GoOs:linux GoArch:amd64}
$ kustomize build vendor/sigs.k8s.io/cluster-api/config/
Error: unable to find one of 'kustomization.yaml', 'kustomization.yml' or 'Kustomization' in directory '/gowork/src/sigs.k8s.io/cluster-api-provider-genesis/vendor/sigs.k8s.io/cluster-api/config'
Is there anything else that needs to be checked?
@xunpan @greut With the kustomize changes, all resources referenced in kustomization.yaml must be placed in or below its folder.
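For illustration (file and directory names here are assumptions based on a standard kubebuilder layout, not taken from the reporter's repo), the constraint looks like this:

```yaml
# config/default/kustomization.yaml -- sketch only.
#
# Under kustomize 2.x, entries in `resources:` must live in or below
# this directory, so a relative path that climbs out of it is rejected:
resources:
- ../rbac/rbac_role.yaml        # fails: not in or below config/default
#
# One workaround is to move/copy the file under config/default/ and
# reference it without `../`:
# resources:
# - rbac/rbac_role.yaml
```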
/assign @figo
/priority important-soon
/milestone v1alpha1
@xunpan to clarify, this is not a problem with the main cluster-api repo;
if you look at the error log:
kustomize build config/default/ > provider-components.yaml
Error: rawResources failed to read Resources:
Load from path ../rbac/rbac_role.yaml failed:
security; file '../rbac/rbac_role.yaml' is not in
or below '/gowork/src/sigs.k8s.io/cluster-api-provider-genesis/config/default'
you can modify your /gowork/src/sigs.k8s.io/cluster-api-provider-genesis/config/default/kustomization.yaml to use `bases` instead of `resources`.
please refer to https://github.com/kubernetes-sigs/cluster-api/blob/master/config/default/kustomization.yaml
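A sketch of the `bases` form, with directory names assumed from a typical kubebuilder layout (each base directory must contain its own kustomization.yaml listing the files beside it):

```yaml
# config/default/kustomization.yaml -- sketch only, modeled on the
# upstream cluster-api example linked above.
#
# `bases` entries may point outside the current directory, so the
# relative paths that `resources` rejects are accepted here:
bases:
- ../crds
- ../rbac     # e.g. config/rbac/kustomization.yaml lists rbac_role.yaml
- ../manager
```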
We depend on kubebuilder, and kubebuilder needs to be updated to generate a kustomization.yaml that supports kustomize 2.0. There is an open issue for this: https://github.com/kubernetes-sigs/kubebuilder/issues/595
This might be something that we want to address in the gitbook documentation until the kubebuilder fix lands.
/cc @davidewatson
The kubebuilder fix is under review: https://github.com/kubernetes-sigs/kubebuilder/pull/614
An open question: do we support kubebuilder versions other than the latest one?
https://github.com/kubernetes-sigs/cluster-api/pull/782 documents the minimum version of kubebuilder as 1.0.5. Not all versions may be tested, but since this is for development, maybe that is okay.
@figo I don't see an issue with requiring a min version of kubebuilder, even if it is the latest release as long as we document it.
What is remaining before we can close this issue?
The kubebuilder fix has now been merged; the remaining item for this ticket is to document the minimum required kubebuilder version once a new release is cut.
/assign
/kind documentation
This is also still waiting on a new release of kubebuilder, so bumping to Next
/milestone Next
/area ux
/cc @asauber
This is caused by a security update in kustomize that prevents directory traversal. The fix boils down to not using `../` paths.
I thought we fixed this by switching to bases instead of resources.
@asauber I was able to run CAPI with kustomize 2.0.3.
It has been fixed; what remains is to document the minimum kubebuilder version.
I have a patch for the book. I'll put it up when I get home.
WIP https://github.com/kubernetes-sigs/cluster-api/pull/1059 Testing and updating the instructions.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
After the release of v1alpha2 we adjusted the required kustomize version to be v3.1+
/close
@vincepri: Closing this issue.
In response to this:
After the release of v1alpha2 we adjusted the required kustomize version to be v3.1+
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.