Hi:
I deployed a single Ingress to provide Grafana access from outside. The Ingress is as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: monitoring-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - http:
      paths:
      - path:
        backend:
          serviceName: monitoring-grafana
          servicePort: 5432
However, when I get the Ingress details, the IP address does not appear:
kubectl get ing
NAME                       HOSTS     ADDRESS   PORTS     AGE
coiling-toucan-monocular   *                   80        1d
monitoring-ingress         *                   80        13m
kubectl get ing monitoring-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2017-10-03T14:03:35Z
  generation: 1
  name: monitoring-ingress
  namespace: default
  resourceVersion: "929379"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/monitoring-ingress
  uid: a5b84800-a843-11e7-b4b3-005056b77c78
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: monitoring-grafana
          servicePort: 5432
status:
  loadBalancer: {}
Where is my IP address? The other Ingress, which has been running for about 2 days, also does not show an IP address. What happened here?
Have you created the nginx-ingress-controller pod? If it is missing, the Ingress won't work when you use the kubernetes.io/ingress.class: "nginx" annotation.
Yes, of course I created the nginx ingress controller. The Ingress works without problems, but the IP is not showing!
If you are deploying it on GKE, it might be related to kubeadm/issues/425. There is a fix there that you can apply (if that is your case).
@PbTG thanks for your reply, but no, my issue is not the one described in the post you sent me. The details:
root@node1:/opt/kubernetes/monitoring# kubectl get pods |grep ingress-controller
exasperated-giraffe-nginx-ingress-controller-1779518062-nlv9k 1/1 Running 0 1d
root@node1:/opt/kubernetes/monitoring#
root@node1:/opt/kubernetes/monitoring# kubectl get ing coiling-toucan-monocular -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2017-10-02T12:19:01Z
  generation: 1
  labels:
    app: coiling-toucan-monocular
    chart: monocular-0.4.9
    heritage: Tiller
    release: coiling-toucan
  name: coiling-toucan-monocular
  namespace: default
  resourceVersion: "803894"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/coiling-toucan-monocular
  uid: df93a315-a76b-11e7-b4b3-005056b77c78
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: coiling-toucan-monocular-ui
          servicePort: 80
        path: /
      - backend:
          serviceName: coiling-toucan-monocular-api
          servicePort: 80
        path: /api/
status:
  loadBalancer: {}
The ingress controller works like a charm:
root@node1:/opt/kubernetes/monitoring# kubectl get ing coiling-toucan-monocular
NAME                       HOSTS     ADDRESS   PORTS     AGE
coiling-toucan-monocular   *                   80        1d
root@s-smartc2-zprei:/opt/kubernetes/monitoring#
root@s-smartc2-zprei:/opt/kubernetes/monitoring# kubectl get nodes
NAME      STATUS    AGE       VERSION
node1     Ready     8d        v1.7.5
node2     Ready     8d        v1.7.5
node3     Ready     8d        v1.7.5
So it is not a problem with RBAC.
@felixPG please post the ingress controller pod log and the YAML you used to create the controller.
@aledbf the controller was created using the official Helm chart stable/nginx-ingress.
The Ingress works fine; I can access the service behind it without problems.
Logs from the controller:
172.17.12.164 - [172.17.12.164] - - [05/Oct/2017:07:48:25 +0000] "GET / HTTP/2.0" 200 1380 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" 323 0.235 [default-eponymous-hedgehog-monocular-ui-80] 10.44.0.9:8080 1392 0.235 200
172.17.12.164 - [172.17.12.164] - - [05/Oct/2017:07:48:25 +0000] "GET /inline.bf62d3a805444c816e0d.bundle.js HTTP/2.0" 200 856 "https://192.168.133.4/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" 47 0.106 [default-eponymous-hedgehog-monocular-ui-80] 10.36.0.17:8080 1460 0.106 200
172.17.12.164 - [172.17.12.164] - - [05/Oct/2017:07:48:25 +0000] "GET /polyfills.0cce62a42ca584988089.bundle.js HTTP/2.0" 200 39630 "https://192.168.133.4/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" 49 0.106 [default-eponymous-hedgehog-monocular-ui-80] 10.36.0.17:8080 130352 0.106 200
172.17.12.164 - [172.17.12.164] - - [05/Oct/2017:07:48:25 +0000] "GET /styles.a22ef3d48e01a720fb8c.bundle.css HTTP/2.0" 200 7144 "https://192.168.133.4/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" 46 0.106 [default-eponymous-hedgehog-monocular-ui-80] 10.44.0.9:8080 7157 0.106 200
172.17.12.164 - [172.17.12.164] - - [05/Oct/2017:07:48:25 +0000] "GET /assets/js/overrides.js HTTP/2.0" 200 160 "https://192.168.133.4/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" 39 0.106 [default-eponymous-hedgehog-monocular-ui-80] 10.44.0.9:8080 160 0.106 200
172.17.12.164 - [172.17.12.164] - - [05/Oct/2017:07:48:25 +0000] "GET /vendor.d434de76ea23320bd67d.bundle.js HTTP/2.0" 200 288596 "https://192.168.133.4/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36" 47 0.106 [default-eponymous-hedgehog-monocular-ui-80] 10.44.0.9:8080 1274904 0.106 200
As you can see, the Ingress and the ingress controller work fine, but the Ingress does not show the IP address. Details:
root@node1:/opt/kubernetes/nfs# kubectl get ing
NAME                           HOSTS     ADDRESS   PORTS     AGE
eponymous-hedgehog-monocular   *                   80        21h
root@node1:/opt/kubernetes/nfs# kubectl describe ingress eponymous-hedgehog-monocular
Name:             eponymous-hedgehog-monocular
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path   Backends
  ----  ----   --------
  *
        /      eponymous-hedgehog-monocular-ui:80 (<none>)
        /api/  eponymous-hedgehog-monocular-api:80 (<none>)
Annotations:
  rewrite-target:  /
Events:  <none>
root@node1:/opt/kubernetes/nfs# kubectl get ing eponymous-hedgehog-monocular -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubernetes.io/ingress.class: nginx
  creationTimestamp: 2017-10-04T10:44:32Z
  generation: 1
  labels:
    app: eponymous-hedgehog-monocular
    chart: monocular-0.4.9
    heritage: Tiller
    release: eponymous-hedgehog
  name: eponymous-hedgehog-monocular
  namespace: default
  resourceVersion: "1031414"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/eponymous-hedgehog-monocular
  uid: 01a4c539-a8f1-11e7-b4b3-005056b77c78
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: eponymous-hedgehog-monocular-ui
          servicePort: 80
        path: /
      - backend:
          serviceName: eponymous-hedgehog-monocular-api
          servicePort: 80
        path: /api/
status:
  loadBalancer: {}
root@node1:/opt/kubernetes/nfs#
root@node1:/opt/kubernetes/nfs# kubectl get pods solitary-stingray-nginx-ingress-controller-2907604817-c5qsx -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    checksum/config: 7433c18b6f142daf093b0c7dbbde2fb975e5e84c39540fcde10ba9d7135ab14d
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"solitary-stingray-nginx-ingress-controller-2907604817","uid":"df8645c2-a8f0-11e7-b4b3-005056b77c78","apiVersion":"extensions","resourceVersion":"1031256"}}
  creationTimestamp: 2017-10-04T10:43:35Z
  generateName: solitary-stingray-nginx-ingress-controller-2907604817-
  labels:
    app: nginx-ingress
    component: controller
    pod-template-hash: "2907604817"
    release: solitary-stingray
  name: solitary-stingray-nginx-ingress-controller-2907604817-c5qsx
  namespace: default
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: solitary-stingray-nginx-ingress-controller-2907604817
    uid: df8645c2-a8f0-11e7-b4b3-005056b77c78
  resourceVersion: "1031309"
  selfLink: /api/v1/namespaces/default/pods/solitary-stingray-nginx-ingress-controller-2907604817-c5qsx
  uid: df8b15ce-a8f0-11e7-b4b3-005056b77c78
spec:
  containers:
  - args:
    - /nginx-ingress-controller
    - --default-backend-service=default/solitary-stingray-nginx-ingress-default-backend
    - --election-id=ingress-controller-leader
    - --ingress-class=nginx
    - --configmap=default/solitary-stingray-nginx-ingress-controller
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.12
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: nginx-ingress-controller
    ports:
    - containerPort: 80
      hostPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      hostPort: 443
      name: https
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-s2w4x
      readOnly: true
  dnsPolicy: ClusterFirst
  hostNetwork: true
  nodeName: s-smartc4-zprei
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 60
  tolerations:
  - effect: NoExecute
    key: node.alpha.kubernetes.io/notReady
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.alpha.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-s2w4x
    secret:
      defaultMode: 420
      secretName: default-token-s2w4x
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: 2017-10-04T10:43:35Z
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: 2017-10-04T10:43:41Z
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: 2017-10-04T10:43:35Z
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://0a999b72e800f866d9c030d5e27c98706625f250e9a9a56016bd1a0264d3a177
    image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.12
    imageID: docker-pullable://gcr.io/google_containers/nginx-ingress-controller@sha256:77c5aec7edd2d5baae6c184e83ea07512b9609874c2755c09f74ee0319820a7e
    lastState: {}
    name: nginx-ingress-controller
    ready: true
    restartCount: 0
    state:
      running:
        startedAt: 2017-10-04T10:43:36Z
  hostIP: 192.168.133.4
  phase: Running
  podIP: 192.168.133.4
  qosClass: BestEffort
  startTime: 2017-10-04T10:43:35Z
root@node1:/opt/kubernetes/nfs#
As you can see, the only way I have to get the IP is from the YAML output or from kubectl describe pods <pod_name> | grep IP, or by giving the controller a fixed IP address, but that is NOT the idea.
root@node1:/opt/kubernetes/nfs# kubectl describe pods solitary-stingray-nginx-ingress-controller-2907604817-c5qsx |grep IP
IP: 192.168.133.4
root@node1:/opt/kubernetes/nfs#
This is definitely a bug, and it is a PAIN! Any ideas why this happens?
@felixPG please update the image to gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.14
Thanks @aledbf. beta.12 is the default tag that the nginx-ingress chart currently uses. I will try it, but the problem does not appear to be with the nginx-ingress version; I think it is a Kubernetes bug.
@felixPG I still have the same issue as you. I think it is a Kubernetes bug: it uses the service's external IP for the Ingress ADDRESS column, so there is no IP here because the service behind your ingress does not have an external IP.
@yuyangbj it is the Ingress IP; it does not matter what type of IP it is! It is a Kubernetes bug!
@felixPG which network plugin are you using?
@yuyangbj weave
From my understanding, Weave should watch the Ingress events and do some magic here; maybe you missed some annotations needed for Weave to find the ingress controller pod?
@felixPG if you are running in the cloud you need to add the flag --publish-service=<ns/ingress svc> to point to a service type=LoadBalancer. That will populate the IP status in the ingress rules.
Please check the deploy guide https://github.com/kubernetes/ingress-nginx/tree/master/deploy#installation-guide
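For anyone wondering what that flag looks like in practice, here is a rough sketch of the relevant part of the controller container spec. The ingress-nginx namespace and Service name below are assumptions, so substitute the namespace and Service from your own install:
containers:
- name: nginx-ingress-controller
  image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.14
  args:
  - /nginx-ingress-controller
  - --default-backend-service=ingress-nginx/default-http-backend
  - --ingress-class=nginx
  # Copies the address of the referenced Service (e.g. its LoadBalancer IP)
  # into the status of every Ingress this controller manages, which is what
  # fills the ADDRESS column of "kubectl get ing".
  - --publish-service=ingress-nginx/ingress-nginx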
Thanks for your reply @aledbf. I'm using the Helm chart to install the ingress controller with RBAC, on a bare-metal cluster brought up with kubeadm. I will try the "manual" option and reply with feedback.
@yuyangbj the annotation is only needed when you have more than one ingress controller; in my case, and for testing purposes, I have only one Ingress and one ingress controller.
@felixPG, I have the same issue as you. Did you find a solution to fix it?
@hongchaodeng the trick I use is to create a Service for the ingress controller with external IPs.
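A minimal sketch of that Service, assuming the chart's default app: nginx-ingress / component: controller labels and using one of my node addresses as a placeholder (adjust both to your environment):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: default
spec:
  selector:
    app: nginx-ingress
    component: controller
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  # externalIPs exposes the controller on this fixed node address, so you
  # always know where to reach it even though "kubectl get ing" stays empty.
  externalIPs:
  - 192.168.133.4
Apply it with kubectl apply -f and point your DNS or clients at that IP.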
Regards
Felix
@felixPG, thanks for your kind reply!
I have a similar issue.
I posted my question to https://groups.google.com/forum/#!topic/kubernetes-users/10NEhhqcPm4 but there is no answer at the moment.
Can anyone check it?
@RouR create a Service for the ingress controller with external IPs.
Regards
@felixPG
OK, I created a Service for the ingress controller with an external IP:
https://github.com/RouR/ToDo-ToBuy/blob/c1019a8d130e9940906651ac817bf61501f1274e/k8s/dev/nginx.yaml#L167
My web service now opens, cool! But:
> kubectl get services --all-namespaces
NAMESPACE       NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)                      AGE
default         kubernetes             ClusterIP   10.96.0.1       <none>           443/TCP                      1d
dev             web-service            ClusterIP   10.111.22.32    <none>           80/TCP                       1d
ingress-nginx   default-http-backend   ClusterIP   10.110.232.9    <none>           80/TCP                       1d
ingress-nginx   ingress-nginx          NodePort    10.110.64.203   192.168.99.100   80:31824/TCP,443:31181/TCP   1d
kube-system     default-http-backend   NodePort    10.108.203.14   <none>           80:30001/TCP                 3m
kube-system     heapster               ClusterIP   10.111.139.88   <none>           80/TCP                       1d
kube-system     kube-dns               ClusterIP   10.96.0.10      <none>           53/UDP,53/TCP                1d
kube-system     kubernetes-dashboard   NodePort    10.98.208.102   <none>           80:30000/TCP                 1d
kube-system     monitoring-grafana     NodePort    10.110.37.141   <none>           8011:30700/TCP               1d
kube-system     monitoring-influxdb    ClusterIP   10.100.177.72   <none>           8086/TCP                     1d
> kubectl get --all-namespaces ing -o wide
NAMESPACE   NAME          HOSTS     ADDRESS   PORTS     AGE
dev         web-ingress   *                   80        1d
What's going on with the Ingress address?
@RouR as a workaround, I gave you the trick of a Service with an external IP so that you can find the IP of the ingress controller without much effort; of course the underlying problem persists and you still cannot see the IP of the Ingress with kubectl get ing.
You are also exposing the ingress controller as a NodePort, and every node will proxy that port. My recommendation is to keep only the externalIP and remove the NodePort, as sketched below; for me that is more convenient, but it depends on your needs ;-)
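In other words, something like this Service spec fragment; the address below is just a placeholder for one of your own node IPs:
spec:
  # ClusterIP plus externalIPs: no NodePort is allocated, and the controller
  # answers only on ports 80/443 of the listed address.
  type: ClusterIP
  externalIPs:
  - 192.168.99.100
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443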
Regards
Thanks for the help.
If this is a workaround, then maybe we need to reopen this issue?
Maybe, but this issue never advances more than a few comments without a real solution.
I'm doing a similar LoadBalancer deployment locally, without using NodePort for the svc, but the website never opens; I'm still wondering how to solve it.
root@node3:/media# kubectl get svc -n ingress-nginx
NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
default-http-backend   ClusterIP      10.105.172.33   <none>         80/TCP                       1h
ingress-nginx          LoadBalancer   10.106.24.26    192.168.0.43   80:30816/TCP,443:30619/TCP   1h
root@node3:/media# kubectl get ing
NAME                 HOSTS              ADDRESS   PORTS     AGE
cafe-ingress-nginx   cafe.example.com             80        16m
root@node3:/media# curl -v cafe.example.com/healthz -k
* Trying 192.168.0.43...
* TCP_NODELAY set
* Connected to cafe.example.com (192.168.0.43) port 80 (#0)
> GET /healthz HTTP/1.1
> Host: cafe.example.com
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Server: nginx/1.13.7
< Date: Sat, 20 Jan 2018 17:20:51 GMT
< Content-Type: text/plain; charset=utf-8
< Content-Length: 2
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host cafe.example.com left intact
ok
root@node3:/media# curl -v cafe.example.com/coffee -k
* Trying 192.168.0.43...
* TCP_NODELAY set
* Connected to cafe.example.com (192.168.0.43) port 80 (#0)
> GET /coffee HTTP/1.1
> Host: cafe.example.com
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 503 Service Temporarily Unavailable
< Server: nginx/1.13.7
< Date: Sat, 20 Jan 2018 17:20:58 GMT
< Content-Type: text/html
< Content-Length: 213
< Connection: keep-alive
<
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.13.7</center>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host cafe.example.com left intact
I am new to Ingress. My Ingress file is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-nginx-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
      - path: /echo
        backend:
          serviceName: echoserver
          servicePort: 8080
My two services are:
[root@node1 kubernetes]# kubectl get svc
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
echoserver   NodePort   10.233.48.121   <none>        8080:31250/TCP   8m
nginx        NodePort   10.233.44.54    <none>        80:32018/TCP     23m
My ingress describe command shows me:
[root@node1 kubernetes]# kubectl describe ing fanout-nginx-ingress
Name:             fanout-nginx-ingress
Namespace:        development
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path   Backends
  ----  ----   --------
  *
        /      nginx:80 (<none>)
        /echo  echoserver:8080 (<none>)
Annotations:
Events:  <none>
All this is deployed on CentOS VMs, and I am using Kubernetes version 1.9.5.
But I am not getting an EXTERNAL-IP or an Ingress ADDRESS.
@jeunii my workaround was to create a Service for the ingress controller with an externalIP.
@felixPG thank you for your reply. Since I am new to Ingress on K8s, I have just one question: do the Ingress controller and the apps I want it to route to have to be in the same namespace? Could you recommend a good doc that would help me understand this concept?
It is because you deployed the ingress controller Service with the NodePort type (which is the default for kubeadm on bare metal). To get the ADDRESS value printed, you should switch from NodePort to an external IP on the Service. Check https://zhuanlan.zhihu.com/p/41071320 for the details.
On Azure AKS, I ran into this because I created the cluster through the portal and the HTTP application routing add-on was enabled by default.