Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG
Please provide the following details:
Environment:
Linux
Minikube version (use minikube version):
v0.24.1
VM driver (cat ~/.minikube/machines/minikube/config.json | grep DriverName):
ISO version (cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
What happened:
I have an nginx deployment and an nginx service. Opening a shell in an nginx pod and trying to access the service from that same pod hangs. Accessing localhost from the pod works, and accessing the service from pods in other deployments works fine as well.
What you expected to happen:
Accessing deployment pods through service ClusterIP should work from any pod, like it does on GKE.
How to reproduce it (as minimally and precisely as possible):
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.13.8-alpine
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
$ kubectl exec -ti nginx-deployment-5859cdcc94-4xsqd -- wget nginx
Connecting to nginx (10.110.129.210:80)
(hangs here)
$ kubectl exec -ti nginx-deployment-7bccf68b7c-pffpf -- wget localhost
Connecting to localhost (127.0.0.1:80)
index.html 100% |*******************************| 612 0:00:00 ETA
$ kubectl run busybox --image=busybox -ti --restart=Never -- wget nginx
Connecting to nginx (10.110.129.210:80)
index.html 100% |*******************************| 612 0:00:00 ETA
Output of minikube logs (if applicable):
Anything else do we need to know:
Tried with both localkube and kubeadm bootstrappers.
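For what it's worth, "a pod cannot reach itself through its own Service ClusterIP" is the classic symptom of hairpin NAT being disabled on the bridge port. A minimal sketch of how one might inspect the per-port hairpin flags from inside the VM (assuming the default docker0 bridge; the helper name is mine, not an existing tool):

```shell
# Each veth attached to a Linux bridge exposes a hairpin_mode flag under
# /sys/class/net/<bridge>/brif/<port>/hairpin_mode. A value of 0 means
# traffic a pod sends to its own Service IP is never looped back to it,
# which would match the hang described above.
check_hairpin() {
  brif="${1:-/sys/class/net/docker0/brif}"   # bridge port dir (assumption)
  for f in "$brif"/*/hairpin_mode; do
    [ -e "$f" ] || continue
    # Print "<port> <flag>" for every port on the bridge.
    printf '%s %s\n' "$(basename "$(dirname "$f")")" "$(cat "$f")"
  done
}

# Run it inside the minikube VM, e.g. after `minikube ssh`:
#   check_hairpin
```

If all ports report 0, the kubelet's --hairpin-mode setting (promiscuous-bridge vs. hairpin-veth) would be the next thing to look at.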
Reproduces on minikube 0.25.0.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Reproduces on minikube v0.28.1
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Hi guys,
This is still a real issue. I understand that there may be no resources to fix it, but let's at least keep it open.
Can someone mark it as fresh, please?
/remove-lifecycle rotten
Still happens with minikube 1.10
I can confirm this still happens in v0.33.1, though I'm not sure why. Help wanted!
Reproduces on minikube version: v0.34.1.
Based on the symptoms described, this bug sounds very much like #1568. As such, I am marking this issue as a duplicate so that we can cooperatively solve this issue in a single location.
Please re-open if you feel this is incorrect.