Minikube version (use `minikube version`): v0.28.0
VM driver (`cat ~/.minikube/machines/minikube/config.json | grep DriverName`): virtualbox
ISO version (`cat ~/.minikube/machines/minikube/config.json | grep -i ISO` or `minikube ssh cat /etc/VERSION`): iso/minikube-v0.28.0.iso
What happened:
After running `minikube addons enable ingress` and then creating an ingress with kubectl, `kubectl get ing` shows the assigned address as 10.0.2.15. When I add this IP to my /etc/hosts file with the corresponding host, it does not hit my nginx controller running on my minikube.
What you expected to happen:
The ingress resource should be assigned the minikube IP address. When I substitute the minikube IP into the hosts file instead of the IP shown by `kubectl get ing`, everything works correctly and I can hit the nginx controller on my minikube. When I run these exact same commands on my Mac, it automatically assigns the minikube IP as the address, so I think this is a Windows-specific issue.
How to reproduce it (as minimally and precisely as possible):
On a Windows OS, run `minikube addons enable ingress`, then create an ingress resource and look at the address it gives you.
I face the same issue on Mac.
I'm facing the same issue on Linux Pop!_OS. Can you post your commands and the output of the curl to the minikube IP? I'm hitting the default backend when curling the minikube IP, as expected. However, when I add the path from my ingress, I still get a 404 Not Found error.
The issue for me stemmed from leaving the host as a default value. I was able to get around the problem by specifying a different host name, as in this tutorial: https://medium.com/@Oskarr3/setting-up-ingress-on-minikube-6ae825e98f82
I am facing the same issue on Mac (minikube v0.28.0). Specifying the host, which I did from the beginning, did not help. I always get the IP address 10.0.2.15 assigned instead of the minikube IP.
@tkautenburger Did you add the minikube IP and hostname to /etc/hosts? After that I curl http://hostname/ and it worked. Make sure to append whatever path after the '/' you specified for the service in the ingress YAML file. If no path was specified, then just http://hostname/ should work.
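For anyone following along, the mapping described above can be sketched like this. The IP and hostname are placeholders: substitute the output of `minikube ip` and the `host:` value from your own ingress YAML.

```shell
# Placeholder values -- substitute your own `minikube ip` output and
# the `host:` field from your ingress YAML.
MINIKUBE_IP="192.168.99.100"
INGRESS_HOST="myhost.internal"

# The line that would be appended to /etc/hosts (with sudo), e.g.:
#   echo "$MINIKUBE_IP $INGRESS_HOST" | sudo tee -a /etc/hosts
HOSTS_LINE="$MINIKUBE_IP $INGRESS_HOST"
echo "$HOSTS_LINE"

# After that, curl the ingress by hostname plus the path from your rule:
#   curl http://$INGRESS_HOST/your-path
```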
Yes, I did that in /etc/hosts. I suppose the minikube IP should also show up when I do a
kubectl get services
after creating the ingress object, but it always resolves the hostname to 10.0.2.15, and it takes about a minute after creation until the IP shows up at the ingress.
Can you post your ingress yaml file, and the output of kubectl describe for the service and the ingress?
I'm having the exact same problem running on Ubuntu 16.04.
$ minikube ip
192.168.99.100
$ kubectl get -f ingress/go-demo-2-ingress.yml
NAME HOSTS ADDRESS PORTS AGE
go-demo-2 * 10.0.2.15 80 6h
Minikube:
$ minikube version
minikube version: v0.28.2
Host:
$ uname -a
Linux NICOD 4.4.0-66-generic #87-Ubuntu SMP Fri Mar 3 15:29:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Sorry for not answering for so long, but I had a nice vacation in the south of France. Anyway, the good news is I got it working; the bad news is I don't know why. In the meantime I did the following:
Here is my ingress YAML script (the example is from a manning book "Kubernetes in Action"):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubia
spec:
  rules:
  - host: myhost.internal
    http:
      paths:
      - path: /kubia
        backend:
          serviceName: kubia-nodeport
          servicePort: 80
Here is the referenced backend NodePort service, which has three pods attached to it via the app selector:
apiVersion: v1
kind: Service
metadata:
  name: kubia-nodeport
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30123
  selector:
    app: kubia
Here is my host entry in the /etc/hosts file:
192.168.99.100 myhost.internal
And now comes the strange thing. When I list the ingress with kubectl get ingress, it still shows 10.0.2.15 as the external IP address:
NAME HOSTS ADDRESS PORTS AGE
kubia myhost.internal 10.0.2.15 80 13m
But when I curl the ingress controller by its hostname, it round-robins nicely through the container apps in my three pods:
$ curl http://myhost.internal/kubia
You've hit kubia-qcl9t
$ curl http://myhost.internal/kubia
You've hit kubia-7z5mc
$ curl http://myhost.internal/kubia
You've hit kubia-r4pvp
$ curl http://myhost.internal/kubia
You've hit kubia-qcl9t
As I said at the beginning, I have no idea what happened; it did not work before. Maybe the update or the reboot did it.
@tkautenburger using the minikube IP works (that's what you're doing with the /etc/hosts file).
However, the problem still remains, the Ingress controller doesn't get the minikube IP (it gets 10.0.2.15 instead)
OK, but why is it then that when I do curl http://192.168.99.100/kubia I still get the default backend of the minikube, and only when I use the hostname do I get to the service? I don't get that.
Because of this: - host: myhost.internal
I got it. It's the network configuration of minikube's VirtualBox VM. When I do a minikube ssh to log on to the virtual machine, I see the following:
$ ifconfig
eth0 Link encap:Ethernet HWaddr 08:00:27:77:FA:2F
inet addr:10.0.2.15 Bcast:10.0.2.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe77:fa2f/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:92 errors:0 dropped:0 overruns:0 frame:0
TX packets:89 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:11928 (11.6 KiB) TX bytes:13054 (12.7 KiB)
eth1 Link encap:Ethernet HWaddr 08:00:27:15:04:B5
inet addr:192.168.99.100 Bcast:192.168.99.255 Mask:255.255.255.0
inet6 addr: fe80::a00:27ff:fe15:4b5/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2 errors:0 dropped:0 overruns:0 frame:0
TX packets:22 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1180 (1.1 KiB) TX bytes:2246 (2.1 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:82 errors:0 dropped:0 overruns:0 frame:0
TX packets:82 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6220 (6.0 KiB) TX bytes:6220 (6.0 KiB)
The ingress controller takes the IP address of the eth0 interface, and that's why it always resolves the external IP to 10.0.2.15. I haven't managed to change the network settings of the virtual machine yet. Every time I make a change, like switching the interfaces or disabling the NAT network on eth0, the minikube won't come up again.
Hi everyone :)
I face the same issue; information below:
kubectl get pods -n ingress-nginx
default-http-backend-5c6d95c48-l47vp 1/1 Running 0 46m
nginx-ingress-controller-6b9b6f7957-sg28z 1/1 Running 0 46m
kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
kubernetes-demo-ingress * 10.0.2.15 80 5m
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-demo-service
spec:
  selector:
    app: kubernetes-demo
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-demo-ingress
spec:
  backend:
    serviceName: kubernetes-demo-service
    servicePort: 8080
I thought kubectl get ing would show me a reachable IP (such as 192.168.1.1) that I could use to curl http://192.168.1.1:8080 and reach my backend service, but it shows me 10.0.2.15 instead.
I don't know why :(
thanks
Hi holy,
the reason is that when you installed your minikube, VirtualBox by default gives the VM two interfaces: eth0 with the IP 10.0.2.15 (used for NATing the VM to the internet) and eth1 with the actual IP that the minikube VM has on the internal, host-only network, e.g. 192.168.1.1. When you create the ingress, it always takes the IP address of the eth0 interface, 10.0.2.15, as the external IP. See also my earlier comment on this, with the output of the `ifconfig` command executed in a shell on the minikube image.
But when you give your Ingress definition a DNS name for your host, e.g. like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-demo-ingress
spec:
  rules:
  - host: your-host-dns-name.local
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-demo-service
          servicePort: 8080
and put the DNS name that you chose in your /etc/hosts file like this:
192.168.1.1 your-host-dns-name.local
you should be able to reach your kubernetes-demo-service through the ingress controller by curling
http://your-host-dns-name.local/
from your localhost. Don't be surprised that you can't reach it via the IP address and that only the host name works: the ingress still has the wrong IP (10.0.2.15) in its status, but it does not seem to care much about it.
Hi @tkautenburger, thank you so much. Now I can reach kubernetes-demo-service.
At first I did not change ingress.yaml; I used the minikube ip command and it showed me the public IP. I think this IP is the VM's IP (because with minikube there is just one node, and the VM's public IP is the ingress public IP), and using this IP + port I can reach my backend service.
Is my thinking right?
Thanks again :)
Hi saga,
yes, that is right. With minikube ip you'll get the minikube VM's IP (typically 192.168.99.100), and because you did not specify a host or a path in your ingress, the ingress resource takes the wildcard (*) and will forward any request to the minikube VM's IP on to your demo service. If you want your ingress resource to forward incoming requests to different services later on, you would have to provide host and path definitions in your ingress definition, like I did.
With your ingress definition you could, for example, try curl http://192.168.99.100/any-url-path-will-do/ and it should still hit your demo service, right? That is probably not what you want, at least not for a production service.
And from the moment you use host and path definitions in your ingress resource, only the host name and path in your ingress will work when curling the service, and not the minikube VM's IP address anymore; all other requests will only hit the default backend of the ingress controller. You can try that by swapping back and forth between your ingress definition and mine.
@tkautenburger Thanks again : )
Yeah, I'd like a summary of this. If I want to use Ingress, I should follow these steps:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
minikube addons enable ingress
======= In this situation, all requests will call kubernetes-demo-service =======
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-demo-ingress
spec:
  backend:
    serviceName: kubernetes-demo-service
    servicePort: 8080
======= In this situation, the backend services are called separately =======
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-demo-ingress
spec:
  rules:
  - host: ingress.demo.kube
    http:
      paths:
      - path: /serviceA
        backend:
          serviceName: serviceA   # name of the Service, not the application
          servicePort: servicePort
      - path: /serviceB
        backend:
          serviceName: serviceB
          servicePort: servicePort
Are the steps above right?
Now I will try traefik :)
+1. Has anyone solved the issue without the host/`/etc/hosts` trick?
Thanks !
Instead of editing your /etc/hosts, you can also make use of the free DNS service nip.io.
So, in your ingress.yml you map your service on hostname 'foo.192.168.99.100.nip.io'.
Afterwards you point your browser to that URL and it will resolve to 192.168.99.100:xxxx and it will show your service/ingress.
Eg: my ingresses:
vagrant@ubuntu-xenial:~$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
gitlab-minio minio.192.168.99.100.nip.io 10.0.2.15 80, 443 11h
gitlab-registry registry.192.168.99.100.nip.io 10.0.2.15 80, 443 11h
gitlab-unicorn gitlab.192.168.99.100.nip.io 10.0.2.15 80, 443 11h
This makes life already a bit easier.
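A minimal sketch of such an ingress, assuming a service named `my-service` on port 8080 and a minikube IP of 192.168.99.100 (both are placeholders; adjust them to your own setup):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress                      # hypothetical name
spec:
  rules:
  - host: foo.192.168.99.100.nip.io     # nip.io resolves this to 192.168.99.100
    http:
      paths:
      - path: /
        backend:
          serviceName: my-service       # hypothetical service
          servicePort: 8080
```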
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I am using ExternalDNS and I wanted my ingress addresses to reflect the ip address of my VM (as seen by my host).
I got this to work as desired by..
1) Enabling the ingress add-on.
2) Disabling the addon-manager so my changes don't get reverted.
3) Updating the deployment/nginx-ingress-controller by replacing --report-node-internal-ip-address with --publish-status-address 192.168.99.100.
It's not obvious to me how this could be implemented dynamically in minikube/deploy/addons/ingress/ingress-dp.yaml and hard coding the ip doesn't seem very portable.
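For reference, the change described in step 3 would look roughly like this in the controller Deployment's container args. This is a sketch with surrounding fields elided; 192.168.99.100 is a placeholder for your own `minikube ip` output.

```yaml
# Fragment of the nginx-ingress-controller Deployment spec (sketch):
spec:
  template:
    spec:
      containers:
      - name: nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        # removed: --report-node-internal-ip-address
        - --publish-status-address=192.168.99.100   # your minikube ip
```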
I am also facing the same problem: the minikube eth0 IP is 10.0.2.15, and the kubectl get ingress command reports the ingress resource's IP as 10.0.2.15, not the minikube VM IP, which is 192.168.99.100. Is there a way in minikube to make the eth0 IP 192.168.99.100 and the eth1 IP 10.0.2.15?
@andahme using your approach to fix the problem does not work, because as soon as we disable ingress on minikube, the ingress controller and deployments get removed. So what do you mean by disabling the addon-manager? Can you give the command for that?
@andahme I think I got how to disable the addon manager, as it is also an addon for minikube:
minikube addons disable addon-manager
thanks for help
Bumping for visibility: this is happening to me on macOS 10.14.4 Mojave with minikube v1.0.0.
/close
@tkautenburger: Closing this issue.
@andahme Not sure if that's the static IP you are talking about, but Docker Desktop on Windows does this in the hosts file:
# Added by Docker Desktop
172.16.0.95 host.docker.internal
172.16.0.95 gateway.docker.internal
# To allow the same kube context to work on the host and the container:
127.0.0.1 kubernetes.docker.internal
# End of section