Hi
I have followed this guide to push an image into the OpenShift internal registry. I am able to log in to the registry; however, I cannot push.
docker push docker-registry-default.10.28.102.29.nip.io/pushed/myimage:latest
The push refers to a repository [docker-registry-default.10.28.102.29.nip.io/pushed/myimage]
f999ae22f308: Retrying in 1 second
Error: Status 503 trying to push repository pushed/myimage: "Application is not available"
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
- The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
- The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
- Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc.) have at least one pod running.
oc version
oc v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
features: Basic-Auth GSSAPI Kerberos SPNEGO
Server https://10.28.102.29:8443
openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
openshift version
openshift v3.6.1+008f2d5
kubernetes v1.6.1+5115d708d7
etcd 3.2.1
oc cluster up --public-hostname 10.28.102.29 --host-data-dir=/opt/openshift/data/ --host-config-dir=/opt/openshift/config/ --use-existing-config --http-proxy=http://USER:PASS@PROXY:8080 --https-proxy=http://USER:PASS@PROXY:8080 --no-proxy=172.30.1.1
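One thing worth double-checking in a proxied setup like this: the docker daemon has its own proxy settings, and a push to the nip.io registry hostname will go through the proxy unless that host is excluded. A minimal sketch, assuming a systemd-managed docker daemon (the drop-in path and values are illustrative, not from this thread):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://USER:PASS@PROXY:8080"
Environment="HTTPS_PROXY=http://USER:PASS@PROXY:8080"
Environment="NO_PROXY=172.30.1.1,docker-registry-default.10.28.102.29.nip.io"

# then reload and restart the daemon
systemctl daemon-reload
systemctl restart docker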
oc create serviceaccount pusher
oc policy add-role-to-user system:image-builder pusher
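Note that add-role-to-user with a bare name targets a user literally named pusher; for the service account created above, the subject is written differently. A hedged sketch of the service-account form (the -z flag resolves the name in the current project; "myproject" below is a placeholder):

oc policy add-role-to-user system:image-builder -z pusher
# or fully qualified:
oc policy add-role-to-user system:image-builder system:serviceaccount:myproject:pusher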
oc create -f - <<API
apiVersion: v1
kind: ImageStream
metadata:
  annotations:
    description: Keeps track of changes in the application image
  name: myimage
API
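Also worth noting: the push target pushed/myimage implies the ImageStream has to live in a project named pushed. A minimal sketch that makes this explicit, assuming that project does not exist yet (not part of the original steps):

oc new-project pushed
oc create -f - <<API
apiVersion: v1
kind: ImageStream
metadata:
  name: myimage
  namespace: pushed
API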
docker login -u pusher -p @&@&@ docker-registry-default.10.28.102.29.nip.io
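For reference, the password here has to be the service account's token, not a user password. One way to fetch it inline (oc serviceaccounts get-token should be available in 3.6; this is a sketch, not from the original steps):

docker login -u pusher -p "$(oc serviceaccounts get-token pusher)" docker-registry-default.10.28.102.29.nip.io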
docker tag f2a91732366c docker-registry-default.10.28.102.29.nip.io/pushed/myimage:latest
docker push docker-registry-default.10.28.102.29.nip.io/pushed/myimage:latest
The push refers to a repository [docker-registry-default.10.28.102.29.nip.io/pushed/myimage]
f999ae22f308: Retrying in 1 second
Error: Status 503 trying to push repository pushed/myimage: "Application is not available"
The application is currently not serving requests at this endpoint. It may not have been started or is still starting.
Possible reasons you are seeing this page:
- The host doesn't exist. Make sure the hostname was typed correctly and that a route matching this hostname exists.
- The host exists, but doesn't have a matching path. Check if the URL path was typed correctly and that the route was created using the desired path.
- Route and path matches, but all pods are down. Make sure that the resources exposed by this route (pods, services, deployment configs, etc.) have at least one pod running.
Expected result: images pushed successfully.
oc status
In project default on server https://10.28.102.29:8443
http://docker-registry-default.10.28.102.29.nip.io to pod port 5000-tcp (svc/docker-registry)
dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v3.6.1
deployment #1 deployed 4 hours ago - 1 pod
svc/kubernetes - 10.30.0.1 ports 443->8443, 53->8053, 53->8053
svc/router - 10.30.177.62 ports 80, 443, 1936
dc/router deploys docker.io/openshift/origin-haproxy-router:v3.6.1
deployment #1 deployed 4 hours ago - 1 pod
View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
/assign bparees
/priority P1
@mfojtik why p1?
Seems like your registry pod is not running. Please confirm it is running and gather the logs from it.
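For example, something along these lines (assuming the registry runs in the default project, as the oc status output above suggests):

oc get pods -n default -l deploymentconfig=docker-registry
oc logs dc/docker-registry -n default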
The registry pod is running
docker-registry-1-2tp44 1/1 Running 0 1d
logs
172.17.0.1 - - [06/Dec/2017:08:30:26 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:26 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:36 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:36 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:46 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:46 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:56 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:30:56 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:06 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:06 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:16 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:16 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:26 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:26 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:36 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:36 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:46 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
172.17.0.1 - - [06/Dec/2017:08:31:46 +0000] "GET /healthz HTTP/1.1" 200 0 "" "Go-http-client/1.1"
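So the only traffic reaching the pod is the kubelet health probe; the push requests never make it through the route. A quick end-to-end check of the route (a sketch; the registry serves /healthz, and -k skips verification of the self-signed certificate):

curl -vk https://docker-registry-default.10.28.102.29.nip.io/healthz

If the router matches the route, this should return 200; if it serves the same "Application is not available" page, the route lookup itself is failing.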
I had the same issue. It turned out to be because my load balancer (AWS ALB) in front of the routers doesn't pass TLS SNI headers. Since the registry and console are using passthrough routes, the routers only look at the SNI header.
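One way to verify this (a sketch; the LB address is a placeholder) is to compare TLS handshakes with and without SNI:

# with SNI: a passthrough route can match, so you should get the registry's certificate
openssl s_client -connect LB_OR_ROUTER_IP:443 -servername docker-registry-default.10.28.102.29.nip.io </dev/null
# without SNI: a passthrough route cannot match the hostname
openssl s_client -connect LB_OR_ROUTER_IP:443 </dev/null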
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle stale
I'm facing the same issue right now, as well as #12863. It seems this bug is closed over and over without a real solution being provided.
Because every person who hits this is hitting it due to unique configuration issues in their environment, there is no generalized problem with pushing images to the internal registry (it's a fundamental feature of OpenShift; it is well tested and works consistently). If the person hitting the issue isn't responsive to our queries for more information, there is nothing else we can do to help them.
Please open your own issue describing your cluster configuration and the error you are hitting, and provide registry logs, if you would like assistance.
The comment above about TLS SNI was key for me. I had the same issue: HTTPS traffic was being terminated and then re-encrypted by a load balancer. I made the LB pass traffic through without terminating SSL, and the issue is gone! Is this documented somewhere? That kube needs TLS SNI to identify the hostnames and be able to route traffic. Thank you!
@lrhazi https://docs.openshift.org/latest/dev_guide/expose_service/expose_internal_ip_router.html#overview
"A router is configured to accept external requests and proxy them based on the configured routes. This is limited to HTTP/HTTPS(SNI)/TLS(SNI), which covers web applications."
Rotten issues close after 30d of inactivity.
Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.
/close