When installing this chart on Kube 1.16, it's throwing this error message:
helm install --name prometheus stable/prometheus-operator
Error: validation failed: [unable to recognize "": no matches for kind "PodSecurityPolicy" in version "extensions/v1beta1", unable to recognize "": no matches for kind "DaemonSet" in version "extensions/v1beta1", unable to recognize "": no matches for kind "Deployment" in version "apps/v1beta2"]
Thanks
I have the same problem with the prometheus installation and some other projects using helm.
From what I understand, the error occurs because of the Kubernetes API changes, as can be seen at the link below:
https://kubernetes.io/blog/2019/07/18/api-deprecations-in-1-16/
What was, for example, apiVersion: extensions/v1beta1 now has to be apps/v1.
To fix this you have to change the charts, or find a version with backward compatibility, which I still can't find.
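For illustration, the change for a Deployment looks roughly like this (the same apps/v1 move applies to DaemonSet, while PodSecurityPolicy moved to policy/v1beta1):
# Rejected by Kubernetes 1.16:
apiVersion: extensions/v1beta1
kind: Deployment
# Accepted:
apiVersion: apps/v1
kind: Deployment
Note that under apps/v1 the spec.selector field is also required, so older manifests may need that added as well.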
I solved this issue on Minikube by downgrading the cluster from 1.16.0 to 1.15.4. Using k8s 1.15.4 and Helm v3.0.0-beta.3 I was able to install "Stable/Prometheus". It worked on Minikube, but I can't say anything about clusters based on kubeadm or anything else.
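For anyone else on Minikube, recreating the cluster at a pinned Kubernetes version is a short sketch like this (note: the first command wipes the existing Minikube cluster):
$ minikube delete
$ minikube start --kubernetes-version=v1.15.4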
I got the same issue here...
There are pull requests in the works, like the one @lookbeat linked, but getting things merged is proving to be pretty slow.
You could:
- hold off on installing kube 1.16 / downgrade back to the latest 1.15.x,
- host the chart yourself with the fixes,
- or, as another quick fix, do a helm install --dry-run to have it generate the YAMLs for you, then simply update them with the appropriate apiVersions (see the sketch below).
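A minimal sketch of that dry-run workaround, assuming Helm 2 syntax (the --name flag, as used earlier in this thread) and an illustrative output filename:
$ helm install --name prometheus stable/prometheus-operator --dry-run --debug > rendered.yaml
# edit rendered.yaml: extensions/v1beta1 -> apps/v1 for Deployment/DaemonSet,
# extensions/v1beta1 -> policy/v1beta1 for PodSecurityPolicy, then:
$ kubectl apply -f rendered.yaml
Note the --debug output also includes release metadata and computed values at the top, which you'd strip out before applying.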
We're also facing this problem on Kubernetes 1.16.0 and Helm 3 (beta 4). We're trying to create the CRDs ourselves. Anyway, there are many obsolete APIs in there.

Is anyone else seeing that none of the rules get applied either? Or the Grafana dashboards?
{{- $kubeTargetVersion := default .Capabilities.KubeVersion.GitVersion .Values.kubeTargetVersionOverride }}
{{- if and (semverCompare ">=1.14.0-0" $kubeTargetVersion) (semverCompare "<1.16.0-0" $kubeTargetVersion) .Values.grafana.enabled .Values.grafana.defaultDashboardsEnabled }}
Guess we have to do a target version override?
Yeah, I have --set kubeTargetVersionOverride="1.15.999" in the meantime
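For reference, the full workaround looks like this (release and namespace names are just examples, Helm 3 syntax assumed):
$ helm install prometheus-operator stable/prometheus-operator --namespace monitoring --set kubeTargetVersionOverride="1.15.999"
This feeds the semverCompare check above a version below 1.16.0, so the rules and Grafana dashboards get rendered again.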
@fcuello-fudo et al.
I suppose, from the perspective of convention, should the upper-bound check even be in place until it is determined that there are actual issues? Or was this a purposeful decision due to some change in 1.16.0? Perhaps I should read more?
should the upper-bound check even be in place until it is determined that there are actual issues?
I have no idea where this check came from, but I agree with this ^
Can someone please test if this is fixed by https://github.com/helm/charts/pull/18721 and report back?
It seems to be fixed, I tested on a 1.16.2 cluster.
[thomas@master01 monitoring]$ helm3 install -n monitoring -f config.yaml global-monitoring stable/prometheus-operator
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
NAME: global-monitoring
LAST DEPLOYED: Wed Nov 20 18:08:08 2019
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
The Prometheus Operator has been installed. Check its status by running:
kubectl --namespace monitoring get pods -l "release=global-monitoring"
Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.
[thomas@master01 monitoring]$ kubectl get nodes
NAME                         STATUS   ROLES    AGE    VERSION
master01.k8s.lemarchand.io   Ready    master   110d   v1.16.2
master02.k8s.lemarchand.io   Ready    master   110d   v1.16.2
master03.k8s.lemarchand.io   Ready    master   110d   v1.16.2
worker01.k8s.lemarchand.io   Ready    <none>   32d    v1.16.2
worker02.k8s.lemarchand.io   Ready    <none>   110d   v1.16.2
worker03.k8s.lemarchand.io   Ready    <none>   110d   v1.16.2
Working on 1.16.0 here as well.
I was able to deploy to a v1.16.1 cluster with helm 3. I did get the manifest_sorter.go:175: info: skipping unknown hook: "crd-install" messages. I'm assuming those are safe to ignore?
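(For context: Helm 3 removed the crd-install hook, which is why manifest_sorter logs that info message; Helm 3 instead supports installing CRDs from a chart's crds/ directory, so the message should be safe to ignore. A quick check that the CRDs exist, using names from this chart:)
$ kubectl get crd prometheuses.monitoring.coreos.com servicemonitors.monitoring.coreos.com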
Same warning here:
helm install prometheus-operator stable/prometheus-operator --namespace=monitoring
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:23:11Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
$ helm install prometheusoperator stable/prometheus-operator -n prometheus
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
Error: Internal error occurred: failed calling webhook "prometheusrulemutate.monitoring.coreos.com": Post https://prometheus-prometheus-oper-operator.prometheus.svc:443/admission-prometheusrules/mutate?timeout=30s: service "prometheus-prometheus-oper-operator" not found
$
I have confirmed that installation works, with the following caveats:
$ helm install prometheus stable/prometheus-operator -n prometheus -v 5
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
NAME: prometheus
LAST DEPLOYED: Fri Dec 6 19:13:57 2019
NAMESPACE: prometheus
STATUS: deployed
REVISION: 1
NOTES:
The Prometheus Operator has been installed. Check its status by running:
kubectl --namespace prometheus get pods -l "release=prometheus"
Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.
Installation was failing for me because I had made several install attempts, and resources from the previous attempts were left behind.
Using verbosity level 5, I was able to see the old/stale resources.
The cleanup for me then involved:
kubectl delete crd alertmanagers.monitoring.coreos.com podmonitors.monitoring.coreos.com prometheuses.monitoring.coreos.com prometheusrules.monitoring.coreos.com servicemonitors.monitoring.coreos.com
kubectl delete ns prometheus
for ps in `kubectl get podsecuritypolicies.policy | grep prometheus | awk '{print$1}' ` ; do kubectl delete podsecuritypolicies.policy $ps ; done
for ps in `kubectl get clusterrole | grep prometheus | awk '{print$1}' ` ; do kubectl delete clusterrole $ps ; done
for ps in `kubectl get clusterrolebinding | grep prometheus | awk '{print$1}' ` ; do kubectl delete clusterrolebinding $ps ; done
for ps in `kubectl get service -n kube-system | grep prometheus | awk '{print$1}' ` ; do kubectl -n kube-system delete service $ps ; done
kubectl delete validatingwebhookconfigurations.admissionregistration.k8s.io prometheus-prometheus-oper-admission
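After that cleanup, a quick sanity check that nothing is left before retrying the install (patterns taken from the commands above):
$ kubectl get crd | grep monitoring.coreos.com
$ kubectl get clusterrole,clusterrolebinding | grep prometheus
# both should print nothing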
I can log in to Grafana and see graphs and data, at least on the out-of-the-box cluster dashboard.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
Can this be reopened or is this issue tracked somewhere else?
Still getting this warning with v8.7.0 chart and helm v3.0.3:
$ helm install prometheus-operator stable/prometheus-operator --namespace monitoring --version 8.7.0 --atomic
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
➜ kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"archive", BuildDate:"2020-01-25T21:52:51Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.7", GitCommit:"be3d344ed06bff7a4fc60656200a93c74f31f9a4", GitTreeState:"clean", BuildDate:"2020-02-11T19:24:46Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
➜ helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
Installed successfully but still get the warning
➜ helm install jizu-promethues-operator stable/prometheus-operator --version 8.7.0 --namespace jizu-monitoring
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
manifest_sorter.go:175: info: skipping unknown hook: "crd-install"
NAME: jizu-promethues-operator
LAST DEPLOYED: Mon Feb 17 15:44:35 2020
NAMESPACE: jizu-monitoring
STATUS: deployed
REVISION: 1
NOTES:
The Prometheus Operator has been installed. Check its status by running:
kubectl --namespace jizu-monitoring get pods -l "release=jizu-promethues-operator"
Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.
Any fix for this issue with helm 3.0.3?
It doesn't seem to be fixed in 8.13.8 either.
Any updates? Getting same error installing with helm
Same here
Read the latest document at https://github.com/helm/charts/tree/master/stable/prometheus-operator#helm-fails-to-create-crds
I'm using k8s v1.17.9 and Helm 3, and it works!
kubectl -n monitoring apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.41/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml
kubectl -n monitoring apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.41/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml
kubectl -n monitoring apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.41/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml
kubectl -n monitoring apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.41/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml
kubectl -n monitoring apply -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.41/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml
helm -n monitoring install prometheus-operator stable/prometheus-operator --set prometheusOperator.createCustomResource=false
📣 Please note: stable/prometheus-operator is being moved to a new helm repo in the prometheus-community GitHub org, and renamed. See https://github.com/prometheus-community/community/issues/28 for context, and stay tuned! 😄