Describe the bug
Ingress specifications like
prometheus/alertmanager/grafana:
  ingress:
    enabled: true
neither cause an ingress object to be created nor produce any useful feedback about the failure.
Version of Helm and Kubernetes:
Helm 3.0.0; GKE 1.13.11-gke.14 and 1.14.8-gke.17
Which chart:
stable/prometheus-operator
What happened:
Installed the chart with helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false (I added --set prometheusOperator.admissionWebhooks.enabled=false because of GKE limitations explained in the README and in https://github.com/helm/charts/issues/13976).
I needed to run
helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false
|| ( sleep 10; helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false )
|| ( sleep 10; helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false )
|| ( sleep 10; helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false )
|| ( sleep 10; helm install monitoring stable/prometheus-operator --namespace=monitoring --wait --timeout 10m --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tlsProxy.enabled=false )
in order to compensate for repeated failures due to Error: could not get apiVersions from Kubernetes: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request.
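For what it is worth, the same workaround can also be written as a retry loop instead of chaining the command with ||; this is only a sketch of the command above, with an arbitrary retry count and sleep:

  # Retry the install a few times to ride out the transient apiVersions error;
  # the retry count and sleep duration here are arbitrary.
  for i in 1 2 3 4 5; do
    helm install monitoring stable/prometheus-operator \
      --namespace=monitoring --wait --timeout 10m \
      --set prometheusOperator.admissionWebhooks.enabled=false \
      --set prometheusOperator.tlsProxy.enabled=false && break
    sleep 10
  done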
monitoring-values.yml:
prometheusOperator:
  cleanupCustomResourceBeforeInstall: true
prometheus:
  ingress:
    enabled: true
    hosts:
      - example.com
    paths:
      - /monitoring/prometheus/?(.*)
  prometheusSpec:
    externalUrl: http://example.com/monitoring/prometheus/
alertmanager:
  ingress:
    enabled: true
    hosts:
      - example.com
    paths:
      - /monitoring/alertmanager/?(.*)
  alertmanagerSpec:
    externalUrl: http://example.com/monitoring/alertmanager/
grafana:
  ingress:
    enabled: true
    hosts:
      - example.com
    path: /monitoring/grafana/?(.*)
  grafana.ini:
    server:
      root_url: http://example.com/monitoring/grafana/
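To check whether these values reach the ingress templates at all, rendering the chart locally and searching for Ingress manifests is a rough way to verify (a sketch, assuming Helm 3's helm template command and the values file above):

  # Render the chart locally with the same values and look for Ingress manifests;
  # if nothing is printed, no ingress objects are being generated from these values.
  helm template monitoring stable/prometheus-operator \
    --namespace=monitoring -f monitoring-values.yml | grep -B2 -A2 "kind: Ingress"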
What you expected to happen:
Ingress objects to be created in the cluster for prometheus, alertmanager and grafana as specified.
How to reproduce it (as minimally and precisely as possible):
The commands in the description.
Anything else we need to know:
./.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Same error here: with enabled: true set under grafana I am hitting the same issue.
The ingress is not created.
I am also getting the same issue; only the Alertmanager ingress is created:
❯ kubectl get ingresses.extensions
NAME                               HOSTS                          ADDRESS        PORTS     AGE
prometheus-operator-alertmanager   alert-monitoring.example.com   192.168.0.21   80, 443   36m
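(A rough way to see which Ingress objects the release actually rendered, assuming the release is named prometheus-operator and lives in the current namespace, is to dump the stored manifest:)

  # Dump the manifest Helm stored for the release and list its rendered Ingress objects;
  # the release name is assumed from the ingress name in the output above.
  helm get manifest prometheus-operator | grep -B1 -A4 "kind: Ingress"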