Kind: Unknown certificate error

Created on 23 Jun 2019 · 31 comments · Source: kubernetes-sigs/kind

What happened:
A newly created cluster cannot be accessed: the API server logs `tls: unknown certificate` handshake errors and requests come back as forbidden for `system:anonymous`.

What you expected to happen:
For the cluster to allow me to access it

How to reproduce it (as minimally and precisely as possible):

  • Install kind
  • Create a cluster
  • Try to access the kind local API server
  • Get:

```
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {

},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {

},
"code": 403
}
```

Anything else we need to know?:

Output of `k cluster-info dump | grep error`:

```
$ (⎈ kubernetes-admin@development|default) ~/dotfiles> k cluster-info dump | grep error
I0623 20:11:59.750151 1 log.go:172] http: TLS handshake error from 172.17.0.1:40802: remote error: tls: unknown certificate
I0623 20:11:59.778107 1 log.go:172] http: TLS handshake error from 172.17.0.1:40804: remote error: tls: unknown certificate
I0623 20:11:59.784042 1 log.go:172] http: TLS handshake error from 172.17.0.1:40806: remote error: tls: unknown certificate
I0623 20:12:00.798037 1 log.go:172] http: TLS handshake error from 172.17.0.1:40814: remote error: tls: unknown certificate
I0623 20:12:00.802912 1 log.go:172] http: TLS handshake error from 172.17.0.1:40812: remote error: tls: unknown certificate
I0623 20:12:04.167665 1 log.go:172] http: TLS handshake error from 172.17.0.1:40836: remote error: tls: unknown certificate
I0623 20:12:04.168768 1 log.go:172] http: TLS handshake error from 172.17.0.1:40838: remote error: tls: unknown certificate
I0623 20:12:04.174608 1 log.go:172] http: TLS handshake error from 172.17.0.1:40840: EOF
E0623 20:10:44.741316 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0623 20:10:48.943720 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
E0623 20:11:04.258735 1 daemon_controller.go:302] kube-system/kindnet failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kindnet", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kindnet", UID:"ff87ef90-95f2-11e9-992b-0242ac110002", ResourceVersion:"237", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63696917451, loc:(time.Location)(0x722ae00)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(v1.LabelSelector)(0xc001893d80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"app":"kindnet", "k8s-app":"kindnet", "tier":"node"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"cni-cfg", VolumeSource:v1.VolumeSource{HostPath:(v1.HostPathVolumeSource)(0xc001893da0), EmptyDir:(v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(v1.GitRepoVolumeSource)(nil), Secret:(v1.SecretVolumeSource)(nil), NFS:(v1.NFSVolumeSource)(nil), ISCSI:(v1.ISCSIVolumeSource)(nil), Glusterfs:(v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(v1.RBDVolumeSource)(nil), FlexVolume:(v1.FlexVolumeSource)(nil), Cinder:(v1.CinderVolumeSource)(nil), CephFS:(v1.CephFSVolumeSource)(nil), Flocker:(v1.FlockerVolumeSource)(nil), DownwardAPI:(v1.DownwardAPIVolumeSource)(nil), FC:(v1.FCVolumeSource)(nil), AzureFile:(v1.AzureFileVolumeSource)(nil), ConfigMap:(v1.ConfigMapVolumeSource)(nil), VsphereVolume:(v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(v1.QuobyteVolumeSource)(nil), AzureDisk:(v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(v1.ProjectedVolumeSource)(nil), PortworxVolume:(v1.PortworxVolumeSource)(nil), ScaleIO:(v1.ScaleIOVolumeSource)(nil), StorageOS:(v1.StorageOSVolumeSource)(nil), CSI:(v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kindnet-cni", Image:"kindest/kindnetd:0.1.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"HOST_IP", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001893dc0)}, v1.EnvVar{Name:"POD_IP", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001893e00)}, v1.EnvVar{Name:"CNI_CONFIG_TEMPLATE", Value:"", ValueFrom:(v1.EnvVarSource)(0xc001893e40)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, 
d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"50Mi", Format:"BinarySI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"cni-cfg", ReadOnly:false, MountPath:"/etc/cni/net.d", SubPath:"", MountPropagation:(v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(v1.Probe)(nil), ReadinessProbe:(v1.Probe)(nil), Lifecycle:(v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(v1.SecurityContext)(0xc000461f40), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(int64)(0xc000a336a8), ActiveDeadlineSeconds:(int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"kindnet", DeprecatedServiceAccount:"kindnet", AutomountServiceAccountToken:(bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(bool)(nil), SecurityContext:(v1.PodSecurityContext)(0xc001bc0180), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"NoSchedule", TolerationSeconds:(int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(int32)(nil), DNSConfig:(v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(string)(nil), EnableServiceLinks:(bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"OnDelete", RollingUpdate:(v1.RollingUpdateDaemonSet)(nil)}, MinReadySeconds:0, RevisionHistoryLimit:(int32)(0xc000a33738)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kindnet": the object has been modified; please apply your changes to the latest version and try again
"message": "CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)",
```

Environment:

  • kind version: (use kind version): v0.3.0
  • Kubernetes version: (use kubectl version): 1.14.1
  • Docker version: (use docker info): 18.09.2
  • OS (e.g. from /etc/os-release): Mac OS High Sierra
Labels: kind/bug, priority/awaiting-more-evidence

Most helpful comment

After facepalming myself for a minute, I remembered how this started. I tried to apply that yaml (the minio svc, not a CRD - my bad) and was getting a response saying unauthorized. So I started poking around and got alarmed with the outputs I pasted (which were irrelevant).

Obviously I'm not getting the unauthorized anymore since I was able to create the service.

I super appreciate you trying to resolve this. Closing!

All 31 comments

For the cluster to allow me to access it

Are you following the `export KUBECONFIG=$(kind get kubeconfig-path --name=...)` output from `kind create cluster`?

I'm not sure if we should be allowing anonymous access by default; I think we're just following kubeadm on that front.
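
As a rough check of what anonymous access amounts to here (a sketch, assuming the kind kubeconfig is active), RBAC can be asked directly what the anonymous user may do:

```
# Impersonate the anonymous user/group and ask RBAC about the path from the 403 above.
kubectl auth can-i get / --as=system:anonymous --as-group=system:unauthenticated
# On kubeadm-based clusters only a few non-resource paths (health/version endpoints)
# are normally open to unauthenticated callers.
kubectl auth can-i get /healthz --as=system:anonymous --as-group=system:unauthenticated
```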

```
$ (⎈ kubernetes-admin@development|default) ~> echo $KUBECONFIG
/Users/carlisia/.kube/kind-config-development:/Users/carlisia/.kube/kind-config-staging

$ (⎈ kubernetes-admin@development|default) ~> cat /Users/carlisia/.kube/kind-config-development
apiVersion: v1
clusters:

- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNU1EWXlNekl3TVRBeU5Wb1hEVEk1TURZeU1ESXdNVEF5TlZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTDNyCkEwa2t3R2hjbGxKdVhRblJia1E5NEpCNlRqZHVXcFJ0ZXg4SXdKaXZmQVZPbXRKdFBiWWdoY05GUFRLRCtTelEKNXY1Y2I0NDJFWnZweHF6ekZMKzl1bXF5NzdOU251Wnp5YzR2d2pLMkJtdzlLcEhSTWZ0WUY1SmwyTWJJOVcrdwpBK1F4MVFoc3l2NzE2UjhWbzhybE9IVkpIYzh4SUpOUmtENFUrSThZNFFWa2JPQktHeHYvckRFd2gyelVJM3N0CkJEWEdYY2ZVVEpxYldDMmpiNTZFSWFKSTYxU3crNDB1dG9wRm51RXdoVnVPYXZLMmVCYU1WVzBVL1ZITkFwcE8KMVM4aG1MNW1ONUR1b0gyeXlkeGtmMVVQeEQvdStHMnRmUFZSTUw2T2FmOGhjaUNmWVA1MElJMmprWVBrUkN6QgpsNmxQeEV2TkFVd095cXJtUXQ4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFEbzdPUFhQaW9FWVh2WXNjNmRiM2J4d1JBR2QKbVIwT2tDMVBzRHp3dVdSRlpkamdoTDdhdndDQjNiRU5NS3FXUTFBa095RUNMRHJUbkVDMTJldkdmRDA2U3FDQQo2SGp6TjZKcm5EY205YWlKbFlTQmhzU1k5OGpkbDZ2VVlwblFaVlVEL05xL1NJNk5HTjZ4V3JZQmVQaHdBdktKCkZGTnU3MFVNaXU5NHk3MGI4Y0prb09hVjYvQUpQcDJSZm55elhiUzBUNkhZeVJqUlFTTXVmdUdPRWxoWWZIeWIKcWV1OWhFMHZJMXU0MFZ4TFFZcU1hVFRzVjVXNy8xUzkyb1VkMHU1NVorWGxFRm8zaXBWUkRrWWsvMWo4Y3Y0TApMeXF6UzVaRGFjNENTOWhadFhiOWRrNkI3alAwb0dJUEdOa1MzU2tFUlVKWGNhZ2dFVVVvWXViYkRmQT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://localhost:51672
  name: development
contexts:
- context:
    cluster: development
    user: kubernetes-admin
  name: kubernetes-admin@development
current-context: kubernetes-admin@development
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJTWlua0FLVjI1bGt3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB4T1RBMk1qTXlNREV3TWpWYUZ3MHlNREEyTWpJeU1ERXdNamhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXZMTXVpNEcvR2RPSi9RQkoKVHhHaVoyU28vc0Z6dlhPZ3Y4UDVPT05ZazVGbGVZcS9ZcnMyQmNhSjVWNUlKZHV2a3dTRlh2UmRhQlRlSkp1dgo3a1ZRc0ZCMEttczh2UExWT2ltRm9Oc1MrQ3BNK1V5dzdMbnBKQXNBaWVzanllT05SZW5EaDRWVG1Kb0hhVzhECld3anplNTNlMWplWEJOVEVtVm82WTE0UU1ycEpITkdsZU9MNEVLTlFCSVF1SEl6cElicWEvd0R5U2w4RzMyUjMKcS9hV0wxeDQyenk4NHVRZkR0RTZOa0hwbHlLeUJaK2RGSWhpTU1uOC9VR1N3Znl3ejNBUitaL3NBeTkxOGVaaQplOGhKL21rZ1BKOUVFNDFJamJBTDkzc093NUlSOWVlRGRrNWtrai9ZS2FlRHpBUkU1N2IzN1ZYKzZMeVh3bndkCjg2RmdLd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGbmdmK3YvWllwQ3gvcU54U1N3cjJob0I2TWUzYXEyMEhmbApNN2xJV3lJUzAxeHkzcU1VNHZhRjlGUzdxYzFXcm14bkJueXdnWS9LQWxGbTJhUUFUTG9qRXErNWpGYkMwdFlSCnpvNlJPMDZQOVlWelVXcC9VaTlDMFhBM1FmVmdZZWcxakx6MEdFUWxMcTdOeTdMeTRoUldkU00vRUlsc2xqN3UKUFkzWUExNE1hNVlkUWVOM1lUV0paZHVrNnpxcTZGNlc4QVpQY2lrbStvRFMzS2FHTk9TSnYxQ0hkNlZHdDVmTQpGOFZrMmJ1OHo2bnJ0aS9uQmdEV0JMZHpaczg0QS90ZW03ZXZNWTBVRHZYWk96MTdmNjBuVG0rSFg1L2lVQnRvCjU5Q3dRczB3dHFvNW44aWp1MEpUY1NLakxPVXROcUg3RVJGNHhsQXhscTNqSFI3c0RvND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdkxNdWk0Ry9HZE9KL1FCSlR4R2laMlNvL3NGenZYT2d2OFA1T09OWWs1RmxlWXEvCllyczJCY2FKNVY1SUpkdXZrd1NGWHZSZGFCVGVKSnV2N2tWUXNGQjBLbXM4dlBMVk9pbUZvTnNTK0NwTStVeXcKN0xucEpBc0FpZXNqeWVPTlJlbkRoNFZUbUpvSGFXOERXd2p6ZTUzZTFqZVhCTlRFbVZvNlkxNFFNcnBKSE5HbAplT0w0RUtOUUJJUXVISXpwSWJxYS93RHlTbDhHMzJSM3EvYVdMMXg0Mnp5ODR1UWZEdEU2TmtIcGx5S3lCWitkCkZJaGlNTW44L1VHU3dmeXd6M0FSK1ovc0F5OTE4ZVppZThoSi9ta2dQSjlFRTQxSWpiQUw5M3NPdzVJUjllZUQKZGs1a2tqL1lLYWVEekFSRTU3YjM3VlgrNkx5WHdud2Q4NkZnS3dJREFRQUJBb0lCQUZGbEJPVytRSjAyUnlZdgpzbTk5enN6RWViVHg0eWZNTVlHbVdlRTFCNmNYcDJyRzg0ajE4ZmFKemo5MjdLNFAxZXNYbnlQM1NqYzBFU0kvCldhTHdtVDZFWmFkS2ZIZVFVM04wSjZUYitwRzdSVnFmdncxTm9BZ2hDc2x5K1F3RHNKT0Fvd3ZZOGRjNFVZd2sKQzVHQUNlNi9pVGhqNEN1QVQ1RktmemNQZ3ZNdDgrMXk5bFlDVDZDZWVaK2tsSHVDNDZVVmNveWJIRk5ZeVljZQpMblZUcjNrOVBTYWw0UXdDcm0zWU5VWHZZWXBoWXl3U0FBa2g4dWZucTNVZlQyK044YjlTNW9yaExwVHQ0Njg2CkpJK0tndDNpS0hOT1RlbnBoY3NKYlZTRkdDSEVPaE1VMkxQcW5scWQ4QmNtSWhHZWpOZWZZMWdFbWdWQkhoZHcKRStwVW5RRUNnWUVBMlRZR0hHYzk0WmcxbENnSFUrWitBTEwvWXBVcWpJWFVhTEdScVNFZ05uZlYyMXZFQnVPbAo4dnQvcGE1LzJTakoxTHBoSUZ3Q3NXc0dMUHMwckk2M2l2bU81UFhubjJ5TStZMTJqcmVDSmpVeCt1YTJRYzZYCjR3MmpzR1hRTldHQVU3YXF2b3hUb1krNDJCZVowc0o5TmpDV3MwUzVjbkJlQ2FQcVMvam5sb0VDZ1lFQTNtWEEKQndKTVk1S1hFQVFyS3R5ZWhmbyt1VWx0L2tGU00zYzBxTDlsY3YwS2krVDEwZ0gva0tkeTExN0s0Vm9RQ3BHdAowYVU1YmtrbXJsdTdqZkZ3akw1bzllY1BoV0J1WDNuV1hseEIvUTlBMElGVGdQVjJXc09SRmdlU0h1cExGaFlHClgyUVRwUkd2S0JacWwvYTdKdElKdUN2bER3Q3E5TTdMaDY1ejJLc0NnWUE4R3ZYdjhDV3dnbVQ1SFdhQnNmdFcKQ0RJaFBuT3F0UEhGRXJYaTNqYkN1OEJpMWU3VmxUTDduTnFDcDFuYlpxMEsvNVFXMXo4cmh4a0xZMnY4Ly9VTQpNT2g0dFE4bUQyeW5OWjBEK3dXNXV1aWNyRERzM3RVcTBFQm1kSlg3MzRJYUtDYnhXWFZlOUoxS3RxVXJMQVJuCjlXUU9NVXM3dnBwWEFwTzMrQ1ZsZ1FLQmdFT204Q014cjhzYWJKbVNxdzcremJvenhhRFhsWDRpb0w3SEpGMncKMjB0L2JoWGdNR2NSOUl3c1krTGdFeGM2TG1jSXFiZDhhMXdCSktNbGhJaEpTZE9HbUtjMUFxT3dFZU01VE55bgpjK3RuR0hCVTV2SHp1VzBpMEovQzdkQTV0VjJpbFkydkE4clM5bFZiZkZGOTNMQ1NkQ0p5Tjl1NGVFakFIMm5HCng3YkJBb0dCQUxCMnZIanh1T0JxMWtMTGFkQ3UrSXlFYm1BM3owOVpIdXlTQ3hNYXRBR3YvOXNGSy9aWDRVcS8KMDY4SmE0NWNzclc0cVdqc1h6UVY5MHkvUFo2d1V1cmpqbmNEQzNiSlp4VWxZY1hDWTRDUlgyR1lnUTIva1dnVQpEWjBuRVBYU1plUFFvQUY2UkRPdXF0b0ZxcWRMSWtkMzNCODNhbS9QZjhCd1lFdU5mb0NTCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
```
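
Since `$KUBECONFIG` here points at two kind kubeconfig files joined with `:`, a quick sanity check (a sketch, assuming the paths above) is to confirm which context kubectl is actually using:

```
kubectl config get-contexts     # contexts merged from both files
kubectl config current-context  # should print kubernetes-admin@development
```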

How am I enabling anonymous access? I'm not trying to do this on purpose, not sure what's misconfigured.

How am I enabling anonymous access? I'm not trying to do this on purpose, not sure what's misconfigured.

Er, not that you are; it's that kind / kubeadm is not, and this part:

"message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",

suggests that whatever made this API call is trying to use anonymous access.
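
As an illustration (a sketch assuming the kind v0.3.0 kubeconfig layout; the /tmp file names are just picked for this example), the same request made with and without the client certificate from the kubeconfig shows the difference:

```
KUBECONFIG_FILE="$(kind get kubeconfig-path --name=development)"
SERVER="$(kubectl --kubeconfig "$KUBECONFIG_FILE" config view --minify -o jsonpath='{.clusters[0].cluster.server}')"

# Extract the CA and the admin client credentials embedded in the kubeconfig
# (on macOS you may need `base64 -D` instead of `--decode`).
kubectl --kubeconfig "$KUBECONFIG_FILE" config view --raw --minify \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 --decode > /tmp/kind-ca.crt
kubectl --kubeconfig "$KUBECONFIG_FILE" config view --raw --minify \
  -o jsonpath='{.users[0].user.client-certificate-data}' | base64 --decode > /tmp/kind-admin.crt
kubectl --kubeconfig "$KUBECONFIG_FILE" config view --raw --minify \
  -o jsonpath='{.users[0].user.client-key-data}' | base64 --decode > /tmp/kind-admin.key

# Authenticated request: lists the API paths.
curl --cacert /tmp/kind-ca.crt --cert /tmp/kind-admin.crt --key /tmp/kind-admin.key "$SERVER/"

# No client cert: the request is treated as system:anonymous and returns the 403 above.
curl --cacert /tmp/kind-ca.crt "$SERVER/"

# No CA either (e.g. a browser that does not trust the cluster CA): the handshake fails
# and the API server logs a TLS handshake error like the ones in the dump.
curl "$SERVER/"
```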

```
$ (⎈ kubernetes-admin@development|default) ~> kind get kubeconfig-path --name "development"
/Users/carlisia/.kube/kind-config-development
```

observation: the default k8s version for kind v0.3.0 should be 1.14.2 and not 1.14.1.

was kind create cluster called with --name "development"?

Ok so maybe I shouldn't be making that call. I was just poking around.

The concern is the output of the kubectl cluster-info dump

was kind create cluster called with --name "development"?

Yes

i can't seem to reproduce the problem. also tried applying a crd.

```
cd kind
git checkout v0.3.0
GO111MODULE=on go build
kind create cluster
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"

# create crd.yaml from https://v1-14.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/

kubectl apply -f crd.yaml
customresourcedefinition.apiextensions.k8s.io/crontabs.stable.example.com created

kubectl get crd
NAME                          CREATED AT
crontabs.stable.example.com   2019-06-23T20:37:06Z
```

Try to access the kind local API server

please clarify what operations you are trying to perform.

Kubernetes version: (use kubectl version): 1.14.1

why is this version 1.14.1 and not 1.14.2 which kind 0.3.0 should use by default?

/priority awaiting-more-evidence

I'll update kubernetes now and see.

I'll update kubernetes now and see.

unless you are calling kind build node-image you don't have to.

kind create cluster for 0.3.0 should download a pre-built node image kindest/node:v1.14.2
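
A quick way to confirm which version the node is actually running (a sketch, assuming the default cluster name `kind`):

```
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl version --short     # the Server Version line should report v1.14.2
docker images kindest/node  # shows the pre-built node image tag that was pulled
```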

what is the output of your kind create cluster command?

```
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.14.2) 🖼
✓ Preparing nodes 📦
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
```

it's using `✓ Ensuring node image (kindest/node:v1.14.2) 🖼`, which is a prebuilt node image.
so updating kubernetes/kubernetes will have no effect.

the node-image is used to create a container that will host a Kubernetes node with docker.
the node can be e.g. a control-plane or a worker.
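
For example (a sketch, assuming the default cluster name so the node container is `kind-control-plane`), you can see that container from the host and look at the cluster from inside it:

```
docker ps --filter name=kind-    # the container(s) kind created to host the node(s)
docker exec -it kind-control-plane \
  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes   # the cluster as seen from inside the node
```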

I can create a CRD, but then it isn't there?

```
bash-3.2$ kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.14.2) 🖼
✓ Preparing nodes 📦
✓ Creating kubeadm config 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Cluster creation complete. You can now use the cluster with:

export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
bash-3.2$ export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
bash-3.2$ kubectl apply -f ~/work/src/github.com/heptio/velero/examples/minio/00-minio-deployment.yaml
namespace/velero created
deployment.apps/minio created
service/minio created
job.batch/minio-setup created
bash-3.2$ kubectl get crd
No resources found.
bash-3.2$
```

```
bash-3.2$ kubectl cluster-info dump | grep error
E0623 20:54:53.979648       1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://172.17.0.3:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 172.17.0.3:6443: connect: connection refused
E0623 20:54:59.912797       1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0623 20:55:17.074423       1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
            "message": "CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)",
bash-3.2$
```
        "message": "CRI error: /sys is read-only: cannot modify conntrack limits, problems may arise later (If running Docker, see docker issue #24000)",

this is expected and ignorable; unfortunately I'm not aware of any way to silence it. *

* we make /sys read-only so your node containers don't muck with things they shouldn't on the host.

E0623 20:54:53.979648 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: Get https://172.17.0.3:6443/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s: dial tcp 172.17.0.3:6443: connect: connection refused
E0623 20:54:59.912797 1 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
E0623 20:55:17.074423 1 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"

have not been able to replicate this yet, but if it's just happening during startup it's probably not an issue; the connections etc. on startup are fairly racy, that's just Kubernetes 😬. It should sort itself out in a future reconcile loop (in theory; if this is persisting then those errors might actually be a problem...)
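
If in doubt, a rough way to check that things have settled (rather than grepping `cluster-info dump` for transient startup errors):

```
kubectl get nodes                # the control-plane node should be Ready
kubectl -n kube-system get pods  # core components should all be Running
```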

I can create a CRD, but then it isn't there?

so this part of your output:

bash-3.2$ kubectl apply -f ~/work/src/github.com/heptio/velero/examples/minio/00-minio-deployment.yaml
namespace/velero created
deployment.apps/minio created
service/minio created
job.batch/minio-setup created

... suggests that no CRDs were created? https://github.com/heptio/velero/blob/master/examples/minio/00-minio-deployment.yaml also does not contain any CRDs

How about the `is forbidden` error? Is this expected?

kubectl cluster-info dump can return a number of errors until the control plane is properly up.

Got it. So, it's been up for a while, and now I'm back at having the unknown certificate TLS error.

Does "unknown certificate authority" ring any bells?

Ohhh. Is it trying to use the internet to validate the certificate authority? If so, then that's what the problem is (probably).

i wouldn't look at cluster-info dump.
are you getting errors during `kubectl apply` or by examining the resources themselves?

See this: https://github.com/kubernetes-sigs/kind/issues/643#issuecomment-504786303

I can create a CRD, but then it's not found.

bash-3.2$ kubectl get crd
No resources found.

but the applied YAML does not have any CRDs?
https://github.com/heptio/velero/blob/master/examples/minio/00-minio-deployment.yaml

a CustomResourceDefinition looks like this https://v1-14.docs.kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition
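
For reference, a minimal CRD along the lines of that page (a sketch of the `crontabs.stable.example.com` docs example, not something from the manifest applied above) looks roughly like this:

```
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
EOF

kubectl get crd   # should now list crontabs.stable.example.com
```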

Gosh. You're right. So maybe my cluster has been working properly this whole time.

i'm not 100% sure what minio is but the service should be up after you apply that manifest:
`kubectl get services -n velero`

After facepalming myself for a minute, I remembered how this started. I tried to apply that yaml (the minio svc, not a CRD - my bad) and was getting a response saying unauthorized. So I started poking around and got alarmed with the outputs I pasted (which were irrelevant).

Obviously I'm not getting the unauthorized anymore since I was able to create the service.

I super appreciate you trying to resolve this. Closing!

no problem, glad we were able to resolve this.

thanks everyone! :-)
