Dashboard: Ingresses are a black box mystery.

Created on 8 Dec 2016  ·  5 Comments  ·  Source: kubernetes/dashboard

Issue details

In the dashboard, we can create ingresses, but they are a total mystery. I've tried this on AWS and GKE, and both have been huge time-wasters.

The dashboard should have information about the Ingress, and a way to test and diagnose if the ingress is working. Pods and Containers have logs - there should be something like that for Ingresses. Otherwise we're just flipping switches.

Environment

Here are my two environments: AWS and GKE.

http://stackoverflow.com/questions/40745885/kubernetes-1-4-ssl-termination-on-aws
http://stackoverflow.com/questions/41043298/kubernetes-ingress-on-gke

Steps to reproduce

Desire to set up Ingresses
Read the documentation to use Kubernetes Ingresses
Start flipping switches
Tableflip

Observed result

An immense amount of time wasted.

Expected result

If Kubernetes is designed to deploy apps on cloud platforms, the dashboard should provide enough information on the Ingress state to identify and solve issues.

Comments

Maybe the fix is as simple as some copy with a checklist, or a shell script that automates the checks.

kind/feature lifecycle/frozen

Most helpful comment

Yeah. All I can say is that I agree... We just haven't had the time to put more effort into this.

All 5 comments

I found these quota errors in the logs after asking on SO. It would be smart to surface these in the dashboard.

$ kubectl describe ingresses
Name:           all-ingress
Namespace:      default
Address:        35.186.216.14
Default backend:    default-http-backend:80 (10.0.1.4:8080)
TLS:
  tls-secret terminates admin-stage.example.com,dashboard-stage.example.com,expert-stage.example.com,signal-stage.example.com,stage.example.com
Rules:
  Host              Path    Backends
  ----              ----    --------
  admin-stage.example.com   
                    /   admin-service:http-port (<none>)
  dashboard-stage.example.com   
                    /   dashboard-service:http-port (<none>)
  expert-stage.example.com  
                    /   expert-service:http-port (<none>)
  signal-stage.example.com  
                    /   signal-service:http-port (<none>)
  stage.example.com     
                    /   www-service:http-port (<none>)
Annotations:
  url-map:          k8s-um-default-all-ingress--c0a017bf739118ea
  backends:         {"k8s-be-31309--c0a017bf739118ea":"Unknown"}
  forwarding-rule:      k8s-fw-default-all-ingress--c0a017bf739118ea
  https-forwarding-rule:    k8s-fws-default-all-ingress--c0a017bf739118ea
  https-target-proxy:       k8s-tps-default-all-ingress--c0a017bf739118ea
  static-ip:            k8s-fw-default-all-ingress--c0a017bf739118ea
  target-proxy:         k8s-tp-default-all-ingress--c0a017bf739118ea
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----                -------------   --------    ------      -------
  18h       17m     1196    {loadbalancer-controller }          Warning     GCE :Quota  googleapi: Error 403: Quota 'STATIC_ADDRESSES' exceeded. Limit: 1.0, quotaExceeded
  17m       1s      20  {loadbalancer-controller }          Warning     GCE :Quota  googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 5.0, quotaExceeded


Name:           button-ingress
Namespace:      default
Address:        35.186.220.220
Default backend:    default-http-backend:80 (10.0.1.4:8080)
TLS:
  tls-secret terminates button-stage.example.com
Rules:
  Host              Path    Backends
  ----              ----    --------
  button-stage.example.com  
                    /   button-service:http-port (<none>)
Annotations:
  url-map:      k8s-um-default-button-ingress--c0a017bf739118ea
  backends:     {"k8s-be-31309--c0a017bf739118ea":"UNHEALTHY","k8s-be-32698--c0a017bf739118ea":"UNHEALTHY"}
  forwarding-rule:  k8s-fw-default-button-ingress--c0a017bf739118ea
  ssl-redirect:     false
  target-proxy:     k8s-tp-default-button-ingress--c0a017bf739118ea
Events:
  FirstSeen LastSeen    Count   From                SubobjectPath   Type        Reason      Message
  --------- --------    -----   ----                -------------   --------    ------      -------
  18h       18m     1192    {loadbalancer-controller }          Warning     GCE :Quota  googleapi: Error 403: Quota 'STATIC_ADDRESSES' exceeded. Limit: 1.0, quotaExceeded
  17m       59s     23  {loadbalancer-controller }          Warning     GCE :Quota  googleapi: Error 403: Quota 'BACKEND_SERVICES' exceeded. Limit: 5.0, quotaExceeded
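As a stopgap, the quota and backend-health signals in the output above can be pulled out with a short script. Here is a minimal Python sketch; the parsing rules (the `Warning` column, the `backends:` annotation holding a JSON health map) are assumptions based on the output shown here, not a stable kubectl format:

```python
import json
import re

def ingress_problems(describe_text):
    """Scan `kubectl describe ingress` output for warning events and
    unhealthy backends; return a list of human-readable findings."""
    problems = []
    for line in describe_text.splitlines():
        # Warning events carry the reason and message at the end of the row.
        if "Warning" in line:
            problems.append("event: " + line.split("Warning", 1)[1].strip())
        # The GCE ingress controller annotates backend health as a JSON map.
        m = re.search(r"backends:\s+(\{.*\})", line)
        if m:
            for backend, health in json.loads(m.group(1)).items():
                if health != "HEALTHY":
                    problems.append(f"backend {backend}: {health}")
    return problems

# Trimmed-down sample mimicking the describe output above.
sample = """\
Annotations:
  backends:     {"k8s-be-31309--x":"UNHEALTHY","k8s-be-32698--x":"HEALTHY"}
Events:
  18h  17m  1196  {loadbalancer-controller }  Warning  GCE :Quota  googleapi: Error 403: Quota 'STATIC_ADDRESSES' exceeded. Limit: 1.0, quotaExceeded
"""
for p in ingress_problems(sample):
    print(p)
```

The same findings could feed a status badge in the dashboard instead of being printed.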

Yeah. All I can say is that I agree... We just haven't had the time to put more effort into this.

I'm guessing that the solution would be something like what we do for pods: trawl the event log for errors related to the Ingress and show some kind of status to the user?
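The event-trawling idea above could be sketched like this; the event shape and status names are illustrative assumptions, and a real implementation would read Event objects from the Kubernetes API rather than plain dicts:

```python
def ingress_status(events):
    """Derive a coarse status for an Ingress from its recent events,
    mirroring how the dashboard summarizes pod problems.
    Each event is a dict with 'type', 'reason', and 'message' keys."""
    warnings = [e for e in events if e["type"] == "Warning"]
    if not warnings:
        return "OK", []
    # Surface the distinct warning reasons/messages to the user.
    details = sorted({f"{e['reason']}: {e['message']}" for e in warnings})
    return "Error", details

# Example using one of the quota events from this issue.
events = [
    {"type": "Warning", "reason": "GCE :Quota",
     "message": "Quota 'BACKEND_SERVICES' exceeded. Limit: 5.0"},
]
status, details = ingress_status(events)
print(status)  # Error
```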

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
