What steps did you take and what happened:
[A clear and concise description on how to REPRODUCE the bug.]
I apologise if I've missed something obvious, still fairly new to all this.
I followed the quick start guide and this guide.
1. git clone https://github.com/kubernetes-sigs/cluster-api.git
2. cd cluster-api/
3. make clusterctl
4. cat > clusterctl-settings.json <<EOF
{
  "providers": ["cluster-api", "bootstrap-kubeadm", "control-plane-kubeadm", "infrastructure-docker"],
  "provider_repos": []
}
EOF
5. make -C test/infrastructure/docker docker-build REGISTRY=gcr.io/k8s-staging-capi-docker
6. make -C test/infrastructure/docker generate-manifests REGISTRY=gcr.io/k8s-staging-capi-docker
7. ./cmd/clusterctl/hack/local-overrides.py
8. cat > ~/.cluster-api/clusterctl.yaml <<EOF
providers:
- name: docker
  url: $HOME/.cluster-api/overrides/infrastructure-docker/latest/infrastructure-components.yaml
  type: InfrastructureProvider
EOF
9. cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /var/run/docker.sock
    containerPath: /var/run/docker.sock
EOF
10. kind create cluster --config ./kind-cluster-with-extramounts.yaml --name clusterapi
11. kind load docker-image gcr.io/k8s-staging-capi-docker/capd-manager-amd64:dev --name clusterapi
12. clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure docker:v0.3.0
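Before running step 12, it can help to confirm that the local overrides from step 7 were generated where clusterctl expects them. A small check (assuming the default override location; adjust the path if you configured a different one):

```shell
# Confirm the local override for the core provider exists before `clusterctl init`.
dir="$HOME/.cluster-api/overrides/cluster-api/v0.3.0"
if [ -d "$dir" ]; then
  ls "$dir"
else
  echo "overrides not found at $dir"
fi
```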
Everything worked as expected up until step 12. This was the output:
Fetching providers
Using Override="core-components.yaml" Provider="cluster-api" Version="v0.3.0"
Error: failed to get provider components for the "cluster-api:v0.3.0" provider: failed to detect default target namespace: Invalid manifest. There should be no more than one resource with Kind Namespace in the provider components yaml
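For anyone hitting this: the error text says clusterctl scans the components YAML for resources with Kind Namespace. A quick way to list which Namespace names your override actually contains (the inline sample here mirrors the failing manifest; in practice point the awk at ~/.cluster-api/overrides/cluster-api/v0.3.0/core-components.yaml):

```shell
# Sample that mirrors the failing manifest (unprefixed namespace names).
cat > /tmp/core-components-sample.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: system
---
apiVersion: v1
kind: Namespace
metadata:
  name: webhook-system
EOF

# Print the name of every Namespace resource in the file.
# -> prints: system, then webhook-system
awk '/^kind: Namespace$/ {ns=1}
     ns && /^  name:/ {print $2; ns=0}' /tmp/core-components-sample.yaml
```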
What did you expect to happen:
Expected something similar to this:
Fetching providers
Installing cert-manager
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.0" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.0" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-docker" Version="v0.3.0" TargetNamespace="capd-system"
Your management cluster has been initialized successfully!
You can now create your first workload cluster by running the following:
clusterctl config cluster [name] --kubernetes-version [version] | kubectl apply -f -
Anything else you would like to add:
[Miscellaneous information that will assist in solving the issue.]
Also used Kind v0.8.1 but had the same problem.
Environment:
- Kubernetes version (use kubectl version): v1.18.6
- OS (e.g. from /etc/os-release): Ubuntu 20.04.1 & macOS v10.15.5

/kind bug
[One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels]
@SmiffyJ I tried but unfortunately am not able to reproduce this. Could you please share (gist/attachment) the contents of ~/.cluster-api/overrides/cluster-api/v0.3.0/core-components.yaml?
/priority awaiting-more-evidence
Sure, here's a link
https://gist.github.com/SmiffyJ/aa5d5c55bbc61f3c0298e07e80a72426
/milestone Next
Having the exact same issue, not sure what to do?
Any reason to use v0.3.0 instead of v0.3.7 which was the last stable release?
This is what ./cmd/clusterctl/hack/local-overrides.py has you run. I don't think the version is relevant here.
v0.3.0 is hard-coded in the script but it's just a placeholder/fixed value
Was this ever solved? having the same issue
@SmiffyJ Is it possible to get the tree of the ~/.cluster-api folder and the output of clusterctl init with the -v 5 flag enabled?
@fabriziopandini Sure here's the tree
├── clusterctl.yaml
└── overrides
    ├── bootstrap-kubeadm
    │   └── v0.3.0
    │       └── bootstrap-components.yaml
    ├── cluster-api
    │   └── v0.3.0
    │       └── core-components.yaml
    ├── control-plane-kubeadm
    │   └── v0.3.0
    │       └── control-plane-components.yaml
    └── infrastructure-docker
        └── v0.3.0
            ├── infrastructure-components.yaml
            └── metadata.yaml
And the output of that command is simply:
Fetching providers
@Emc1992 I've not found a solution - would be very interested in a fix though
the output of clusterctl init with -v 5 flag enabled?
Sorry for not being clear:
The output of clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure docker:v0.3.0 -v 5
I've not found a solution
Could you share the solution? I'm working on https://github.com/kubernetes-sigs/cluster-api/pull/3514 and I would like to make sure to cover all the possible problems
Sorry for not being clear:
The output of clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure docker:v0.3.0 -v 5
Ah I see! This is the output to that:
Fetching providers
Using Override="core-components.yaml" Provider="cluster-api" Version="v0.3.0"
Error: failed to get provider components for the "cluster-api:v0.3.0" provider: failed to detect default target namespace: Invalid manifest. There should be no more than one resource with Kind Namespace in the provider components yaml
sigs.k8s.io/cluster-api/cmd/clusterctl/client.(*clusterctlClient).addToInstaller
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/client/init.go:251
sigs.k8s.io/cluster-api/cmd/clusterctl/client.(*clusterctlClient).setupInstaller
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/client/init.go:183
sigs.k8s.io/cluster-api/cmd/clusterctl/client.(*clusterctlClient).Init
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/client/init.go:92
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.runInit
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/cmd/init.go:146
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.glob..func6
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/cmd/init.go:88
github.com/spf13/cobra.(*Command).execute
/Users/joe/go/pkg/mod/github.com/spf13/[email protected]/command.go:840
github.com/spf13/cobra.(*Command).ExecuteC
/Users/joe/go/pkg/mod/github.com/spf13/[email protected]/command.go:945
github.com/spf13/cobra.(*Command).Execute
/Users/joe/go/pkg/mod/github.com/spf13/[email protected]/command.go:885
sigs.k8s.io/cluster-api/cmd/clusterctl/cmd.Execute
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/cmd/root.go:52
main.main
/Users/joe/Documents/code/cluster-api/cmd/clusterctl/main.go:25
runtime.main
/usr/local/Cellar/go/1.14.5/libexec/src/runtime/proc.go:203
runtime.goexit
/usr/local/Cellar/go/1.14.5/libexec/src/runtime/asm_amd64.s:1373
Could you share the solution? I'm working on #3514 and I would like to make sure to cover all the possible problems
Sorry I have not found a solution yet
provider: failed to detect default target namespace: Invalid manifest. There should be no more than one resource with Kind Namespace in the provider components yaml
Could you provide the following file?
└── overrides
    └── cluster-api
        └── v0.3.0
            └── core-components.yaml
@fabriziopandini Yep that's here:
https://gist.github.com/SmiffyJ/aa5d5c55bbc61f3c0298e07e80a72426
There is something weird in your setup, but I can't understand it.
The manifest generated for you contains:
apiVersion: v1
kind: Namespace
metadata:
  labels:
    cluster.x-k8s.io/provider: cluster-api
    control-plane: controller-manager
  name: system
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    cluster.x-k8s.io/provider: cluster-api
    control-plane: controller-manager
  name: webhook-system
...
While it should contain
apiVersion: v1
kind: Namespace
metadata:
  labels:
    cluster.x-k8s.io/provider: cluster-api
    control-plane: controller-manager
  name: capi-system
---
apiVersion: v1
kind: Namespace
metadata:
  labels:
    cluster.x-k8s.io/provider: cluster-api
    control-plane: controller-manager
  name: capi-webhook-system
...
(with the capi prefix in front of each name)
Looks like this might depend on your kustomize version. Mine is 3.1.0 (if I remember correctly, any version > 3 should work).
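For anyone debugging this later: the capi- prefix is applied by kustomize's namePrefix transformer, so a kustomize that mishandles it would explain the unprefixed names. A minimal sketch of the mechanism (hypothetical files, not the project's actual kustomization):

```shell
mkdir -p /tmp/prefix-demo && cd /tmp/prefix-demo

# A resource with the bare name that kustomize is supposed to prefix.
cat > namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: system
EOF

# The namePrefix transformer turns "system" into "capi-system".
cat > kustomization.yaml <<'EOF'
namePrefix: capi-
resources:
- namespace.yaml
EOF
```

With a working kustomize (> 3), running `kustomize build /tmp/prefix-demo` should print the Namespace with `name: capi-system`; if it still prints `name: system`, the installed kustomize is the problem.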
This look very similar to my issue, I had to tweak the namespaces manually
It's working!
@fabriziopandini You were correct: I first manually changed the names of both Namespaces to include the capi- prefix, which worked. I then reinstalled kustomize (3.8.1) with Homebrew and the override Python script produced the correct core-components.yaml file.
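For anyone landing here with the same symptom, a quick check of which kustomize is on your PATH before regenerating the overrides (assuming the hack script uses the kustomize found on PATH; version output format varies across releases):

```shell
# Check which kustomize the override generation will pick up.
if command -v kustomize >/dev/null 2>&1; then
  kustomize version
else
  echo "kustomize not found on PATH"
fi
```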
Thanks for the help
yay! happy to help!
/close
@fabriziopandini: Closing this issue.
In response to this:
yay! happy to help!
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.