When deploying Metrics on OpenShift Origin 3.9, I get errors that the following images are not found:
docker.io/openshift/origin-metrics-cassandra:v3.9.0
docker.io/openshift/origin-metrics-hawkular-metrics:v3.9.0
docker.io/openshift/origin-metrics-heapster:v3.9.0
I looked at Docker Hub and saw that these images are tagged v3.9, not v3.9.0.
oc v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://xxxxx
openshift v3.9.0+ba7faec-1
kubernetes v1.9.1+a0ce1bc657
All the pods associated with metrics are in an ImagePullBackOff status:
hawkular-cassandra-1-c27b4 0/1 ImagePullBackOff 0 18m
hawkular-metrics-zsvkb 0/1 ImagePullBackOff 0 18m
heapster-78w7j 0/1 ImagePullBackOff 0 18m
Expected: all the pods in a Running state.
Events for heapster pod show:
Failed to pull image "docker.io/openshift/origin-metrics-heapster:v3.9.0": rpc error: code = Unknown desc = manifest for docker.io/openshift/origin-metrics-heapster:v3.9.0 not found
Passing -e openshift_metrics_image_version=v3.9 works as a workaround.
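The same workaround can be made permanent in the Ansible inventory instead of on the command line. A minimal sketch (the variable name comes from openshift-ansible; the group header shown is the standard one for these installs, but adjust to your own inventory layout):

```ini
[OSEv3:vars]
# Pin only the metrics images to the tag that actually exists on Docker Hub.
# openshift_metrics_image_version is a separate variable from
# openshift_image_tag and is not subject to the v#.#.# format check.
openshift_metrics_image_version=v3.9
```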
@smarterclayton looks like we are missing 3.9.0 image tags for metrics
I'm facing the same error.
same problem here
please tag it with v3.9.0
Any news? It should be trivial to add that tag for consistency.
Attempting to work around this by setting openshift_image_tag=v3.9 in the inventory fails with:
fatal: [kube-sandbox-m02.uio.no]: FAILED! => {"msg": "last_checked_host: kube-sandbox-m02.uio.no, last_checked_var: openshift_image_tag; openshift_image_tag must be in the format v#.#.#[-optional.#]. Examples: v1.2.3, v3.5.1-alpha.1. You specified openshift_image_tag=v3.9"}
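That failure comes from a format check on openshift_image_tag. The check can be approximated with a small shell function (the regex below is an illustration of the documented v#.#.#[-optional.#] format, not the exact openshift-ansible validation code):

```shell
#!/bin/sh
# Approximation of the openshift_image_tag format check:
# v#.#.#[-optional.#], e.g. v1.2.3 or v3.5.1-alpha.1.
valid_image_tag() {
  printf '%s\n' "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+(-[A-Za-z0-9.]+)?$'
}

valid_image_tag v3.9.0 && echo "v3.9.0: accepted"
valid_image_tag v3.5.1-alpha.1 && echo "v3.5.1-alpha.1: accepted"
valid_image_tag v3.9 || echo "v3.9: rejected"
```

This is why v3.9 is rejected for openshift_image_tag even though it is the only tag the metrics images actually carry on Docker Hub; openshift_metrics_image_version does not go through this check.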
Please tag these images with v3.9.0!
How come, after all these months, this is still not fixed?
Not in the playbooks, not in Docker Hub... Come on, it can't be that hard.
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
@openshift-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.