What happened:
A newly created cluster returns a 403 Forbidden for `system:anonymous` when I try to access it.
What you expected to happen:
For the cluster to allow me to access it
How to reproduce it (as minimally and precisely as possible):
Try to access the kind local server:

```
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
  },
  "code": 403
}
```
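For context, this response comes from hitting the API server endpoint from the host without client credentials. A minimal sketch of such a request, assuming `curl` and reading the server address out of the current kubeconfig:

```
# read the API server address from the active kubeconfig context
SERVER="$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')"
# an unauthenticated (anonymous) request to the root path returns the 403 above
curl -k "$SERVER/"
```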
Anything else we need to know?:
Output of `k cluster-info dump | grep error`:

```
$ k cluster-info dump | grep error
I0623 20:11:59.750151 1 log.go:172] http: TLS handshake error from 172.17.0.1:40802: remote error: tls: unknown certificate
I0623 20:11:59.778107 1 log.go:172] http: TLS handshake error from 172.17.0.1:40804: remote error: tls: unknown certificate
I0623 20:11:59.784042 1 log.go:172] http: TLS handshake error from 172.17.0.1:40806: remote error: tls: unknown certificate
I0623 20:12:00.798037 1 log.go:172] http: TLS handshake error from 172.17.0.1:40814: remote error: tls: unknown certificate
I0623 20:12:00.802912 1 log.go:172] http: TLS handshake error from 172.17.0.1:40812: remote error: tls: unknown certificate
I0623 20:12:04.167665 1 log.go:172] http: TLS handshake error from 172.17.0.1:40836: remote error: tls: unknown certificate
I0623 20:12:04.168768 1 log.go:172] http: TLS handshake error from 172.17.0.1:40838: remote error: tls: unknown certificate
I0623 20:12:04.174608 1 log.go:172] http: TLS handshake error from 172.17.0.1:40840: EOF
E0623 20:10:44.741316 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0623 20:10:48.943720 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
E0623 20:11:04.258735 1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"ff87ef90-95f2-11e9-992b-0242ac110002", ResourceVersion:"237", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63696917451, loc:(time.Location)(0x722ae00)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(v1.LabelSelector)(0xc001893d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001893da0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.1.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001893dc0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001893e00)}, v1.EnvVar{Name:"CNI_CONFIG_TEMPLATE", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001893e40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc000461f40), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(int64)(0xc000a336a8), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(bool)(nil), SecurityContext:(v1.PodSecurityContext)(0xc001bc0180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(int32)(nil), DNSConfig:(v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(string)(nil), EnableServiceLinks:(bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"OnDelete", RollingUpdate:(v1.RollingUpdateDaemonSet)(nil)}, MinReadySeconds:0, RevisionHistoryLimit:(int32)(0xc000a33738)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
"message": "CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)",
```
Environment:
- kind version (use `kind version`): v0.3.0
- Kubernetes version (use `kubectl version`): 1.14.1
- Docker version (use `docker info`): 18.09.2
- OS (e.g. from `/etc/os-release`): macOS High Sierra
Are you following the `export KUBECONFIG=$(kind get kubeconfig-path --name=...)` output from `kind create cluster`?
I'm not sure if we should be allowing anonymous access by default, I think we're just following kubeadm on that front.
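If you want to see what anonymous users are actually granted, one place to look (a sketch; `system:public-info-viewer` is the binding kubeadm-era 1.14 clusters use to give unauthenticated users limited read access):

```
# inspect the binding that grants system:unauthenticated access to a few paths
kubectl get clusterrolebinding system:public-info-viewer -o yaml
```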
```
$ echo $KUBECONFIG
/Users/carlisia/.kube/kind-config-development:/Users/carlisia/.kube/kind-config-staging
$ cat /Users/carlisia/.kube/kind-config-development
apiVersion: v1
clusters:
```
How am I enabling anonymous access? I'm not trying to do this on purpose, not sure what's misconfigured.
> How am I enabling anonymous access? I'm not trying to do this on purpose, not sure what's misconfigured.
er, not that you are; it's that kind / kubeadm is allowing it by default. and this part:
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
suggests that whatever made this API call is trying to use anonymous access.
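If you do want to query that path, going through kubectl uses your kubeconfig credentials instead of anonymous access. A sketch:

```
# query the API server root path with the credentials from $KUBECONFIG
kubectl get --raw /
```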
```
$ kind get kubeconfig-path --name "development"
/Users/carlisia/.kube/kind-config-development
```
observation: the default k8s version for kind v0.3.0 should be 1.14.2, not 1.14.1.
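An easy way to cross-check what the node is actually running (a sketch):

```
# the VERSION column shows the kubelet version on each node
kubectl get nodes -o wide
```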
was `kind create cluster` called with `--name "development"`?
Ok so maybe I shouldn't be making that call. I was just poking around.
The concern is the output of `kubectl cluster-info dump`.
> was `kind create cluster` called with `--name "development"`?
Yes
i can't seem to reproduce the problem. also tried applying a CRD.
```
cd kind
git checkout v0.3.0
GO111MODULE=on go build
kind create cluster
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
# create crd.yaml from https://v1-14.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/
kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com created
kubectl get crd
NAME                          CREATED AT
crontabs.stable.example.com   2019-06-23T20:37:06Z
```
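For completeness, a `crd.yaml` matching the example on that docs page, reproduced here so the repro is self-contained:

```
# write the example CustomResourceDefinition from the linked docs page
cat <<EOF > crd.yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
EOF
```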
> Try to access the kind local server
please clarify what operations you are trying to perform.
> Kubernetes version (use `kubectl version`): 1.14.1
why is this version 1.14.1 and not 1.14.2, which kind v0.3.0 should use by default?
/priority awaiting-more-evidence
I'll update kubernetes now and see.
> I'll update kubernetes now and see.
unless you are calling `kind build node-image` you don't have to.
`kind create cluster` for 0.3.0 should download a pre-built node image, `kindest/node:v1.14.2`.
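If someone does want a different Kubernetes version, the node image can also be chosen explicitly instead of relying on the default (a sketch; `--image` is kind's flag for this):

```
# pin the cluster to a specific prebuilt node image
kind create cluster --image kindest/node:v1.14.2
```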
what is the output of your `kind create cluster` command?
```
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.14.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
```
it's using `✓ Ensuring node image (kindest/node:v1.14.2) 🖼`, which is a prebuilt node image.
so updating kubernetes/kubernetes will have no effect.
the node-image is used to create a container that will host a kubernetes node with docker.
the node can be e.g. a control-plane or a worker.
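You can see those node containers directly on the host (a sketch; "kind" is the default cluster name used in the container names):

```
# each kind node is an ordinary container running on the host
docker ps --filter "name=kind"
```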
I can create a CRD, but then it isn't there?
```
bash-3.2$ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.14.2) 🖼
 ✓ Preparing nodes 📦
 ✓ Creating kubeadm config 📜
 ✓ Starting control-plane 🕹️
 ✓ Installing CNI 🔌
 ✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
bash-3.2$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
bash-3.2$ kubectl apply -f ~/work/src/github.com/heptio/velero/examples/minio/00-minio-deployment.yaml
namespace/velero created
deployment.apps/minio created
service/minio created
job.batch/minio-setup created
bash-3.2$ kubectl get crd
No resources found.
bash-3.2$
```
```
bash-3.2$ kubectl cluster-info dump | grep error
E0623 20:54:53.979648 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://172.17.0.3:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 172.17.0.3:6443: connect: connection refused
E0623 20:54:59.912797 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0623 20:55:17.074423 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
"message": "CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)",
bash-3.2$
```
"message": "CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)",
this is expected and ignorable; unfortunately I'm not aware of any way to silence it. *
* we make /sys read only so your node containers don't muck with things they shouldn't on the host.
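You can confirm that mount from the host if you're curious (a sketch; `kind-control-plane` is the default name of the single node container):

```
# show how /sys is mounted inside the kind node container (should say "ro")
docker exec kind-control-plane mount | grep ' /sys '
```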
> E0623 20:54:53.979648 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://172.17.0.3:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 172.17.0.3:6443: connect: connection refused
> E0623 20:54:59.912797 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
> E0623 20:55:17.074423 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
have not been able to replicate this yet, but if it's just happening during startup it's probably not an issue. the connections etc. on startup are fairly racy, that's just Kubernetes 😬, and it should sort itself out in a future reconcile loop (in theory; if this is persisting then those errors might actually be a problem...)
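If you want to rule out startup races before reading the logs, one option is to wait for the node to report Ready first (a sketch; `kubectl wait` is available in 1.14):

```
# block until every node is Ready (or 2 minutes pass), then re-check the dump
kubectl wait --for=condition=Ready node --all --timeout=120s
```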
> I can create a CRD, but then it isn't there?
so this part of your output:
```
bash-3.2$ kubectl apply -f ~/work/src/github.com/heptio/velero/examples/minio/00-minio-deployment.yaml
namespace/velero created
deployment.apps/minio created
service/minio created
job.batch/minio-setup created
```
... suggests that no CRDs were created? https://github.com/heptio/velero/blob/master/examples/minio/00-minio-deployment.yaml also does not contain any CRDs
How about the `is forbidden` error? Is this expected?
`kubectl cluster-info dump` can return a number of errors until the control plane is properly up.
Got it. So, it's been up for a while, and now I'm back to having the unknown certificate TLS error.
does `unknown certificate authority` ring any bells?
Ohhh. Is it trying to use the internet to validate the certificate authority? If so, then that's what the problem is (probably).
i wouldn't look at `cluster-info dump`.
are you getting errors during `kubectl apply`, or by examining the resources themselves?
See this: https://github.com/kubernetes-sigs/kind/issues/643#issuecomment-504786303
> I can create a CRD, but then it's not found.

```
bash-3.2$ kubectl get crd
No resources found.
```
but the applied YAML does not have any CRDs?
https://github.com/heptio/velero/blob/master/examples/minio/00-minio-deployment.yaml
a CustomResourceDefinition looks like this https://v1-14.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition
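A quick way to check what kinds a manifest actually creates before assuming CRDs are involved (a sketch; in kubectl 1.14, `--dry-run` is the client-side boolean flag):

```
# list the resources the manifest would create, without touching the cluster
kubectl apply --dry-run -f 00-minio-deployment.yaml -o name
# or just inspect the manifest itself
grep '^kind:' 00-minio-deployment.yaml
```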
Gosh. You're right. So maybe my cluster has been working properly this whole time.
i'm not 100% sure what minio is, but the service should be up after you apply that manifest:
```
kubectl get services -n velero
```
After facepalming myself for a minute, I remembered how this started. I tried to apply that yaml (the minio svc, not a CRD - my bad) and was getting a response saying unauthorized. So I started poking around and got alarmed with the outputs I pasted (which were irrelevant).
Obviously I'm not getting the unauthorized anymore since I was able to create the service.
I super appreciate you trying to resolve this. Closing!
no problem, glad we were able to resolve this.
thanks everyone! :-)