Origin: Getting started example of tagging is not working

Created on 5 Apr 2016 · 4 comments · Source: openshift/origin

Following the steps at https://docs.openshift.org/latest/getting_started/administrators.html (I'm using the containerized version), everything works until I reach the step where I tag v2 of the deployment example with oc tag deployment-example:v2 deployment-example:latest

Instead of success, I get the following error message:

error: "deployment-example:v2" is not currently pointing to an image, cannot use it as the source of a tag

Version

oc v1.1.5-93-g84434f4
kubernetes v1.2.0-36-g4a3f9c5

Steps To Reproduce

Follow the guide at https://docs.openshift.org/latest/getting_started/administrators.html

Current Result

error: "deployment-example:v2" is not currently pointing to an image, cannot use it as the source of a tag

Expected Result

The app should be updated to v2 and viewing the app should display "v2"

Additional Information

[Note] Determining if client configuration exists for client/cluster diagnostics
Info: Successfully read a client config file at '/var/lib/origin/openshift.local.config/master/admin.kubeconfig'
Info: Using context for cluster-admin access: 'default/192-168-90-55:8443/system:admin'

[Note] Running diagnostic: ConfigContexts[/192-168-90-55:8443/test]
Description: Validate client config context is complete and has connectivity

Info: For client config context '/192-168-90-55:8443/test':
The server URL is 'https://192.168.90.55:8443'
The user authentication is 'test/192-168-90-55:8443'
The current project is 'default'
Successfully requested project list; has access to project(s):
[test]

[Note] Running diagnostic: ConfigContexts[default/192-168-90-55:8443/system:admin]
Description: Validate client config context is complete and has connectivity

Info: For client config context 'default/192-168-90-55:8443/system:admin':
The server URL is 'https://192.168.90.55:8443'
The user authentication is 'system:admin/192-168-90-55:8443'
The current project is 'default'
Successfully requested project list; has access to project(s):
[default openshift openshift-infra test]

[Note] Running diagnostic: DiagnosticPod
Description: Create a pod to run diagnostics from the application standpoint

WARN: [DCli2013 from diagnostic DiagnosticPod@openshift/origin/pkg/diagnostics/client/run_diagnostics_pod.go:157]
See the warnings below in the output from the diagnostic pod:
[Note] Running diagnostic: PodCheckAuth
Description: Check that service account credentials authenticate as expected

   Info:  Service account token successfully authenticated to master

   WARN:  [DP1007 from diagnostic PodCheckAuth@openshift/origin/pkg/diagnostics/pod/auth.go:116]
          DNS resolution for registry address docker-registry.default.svc.cluster.local returned no results; either the integrated registry is not deployed, or container DNS configuration is incorrect.

   [Note] Running diagnostic: PodCheckDns
          Description: Check that DNS within a pod works as expected

   [Note] Summary of diagnostics execution (version v1.1.5):
   [Note] Warnings seen: 1

[Note] Running diagnostic: ClusterRegistry
Description: Check that there is a working Docker registry

WARN: [DClu1002 from diagnostic ClusterRegistry@openshift/origin/pkg/diagnostics/cluster/registry.go:186]
There is no "docker-registry" service in project "default". This is not strictly required to
be present; however, it is required for builds, and its absence probably
indicates an incomplete installation.

   Please consult the documentation and use the 'oadm registry' command
   to create a Docker registry.

[Note] Running diagnostic: ClusterRoleBindings
Description: Check that the default ClusterRoleBindings are present and contain the expected subjects

[Note] Running diagnostic: ClusterRoles
Description: Check that the default ClusterRoles are present and contain the expected permissions

[Note] Running diagnostic: ClusterRouterName
Description: Check there is a working router

WARN: [DClu2001 from diagnostic ClusterRouter@openshift/origin/pkg/diagnostics/cluster/router.go:129]
There is no "router" DeploymentConfig. The router may have been named
something different, in which case this warning may be ignored.

   A router is not strictly required; however it is needed for accessing
   pods from external networks and its absence likely indicates an incomplete
   installation of the cluster.

   Use the 'oadm router' command to create a router.

[Note] Running diagnostic: MasterNode
Description: Check if master is also running node (for Open vSwitch)

Info: Found a node with same IP as master: cropenshift-1.novalocal

[Note] Running diagnostic: NodeDefinitions
Description: Check node records on master

[Note] Summary of diagnostics execution (version v1.1.5-93-g84434f4):
[Note] Warnings seen: 3
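
As a side note, the two infrastructure warnings above already point at their own fixes; a minimal sketch of the suggested commands follows (extra flags, e.g. for credentials or service accounts, may be needed depending on the install; see 'oadm registry -h' and 'oadm router -h'):

    # Sketch only: create the integrated registry and a router,
    # as the diagnostics suggest.
    oadm registry
    oadm router

These warnings are probably unrelated to the tagging error itself, since the deployment-example image comes from an external registry rather than the integrated one.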

Labels: area/documentation, component/imageregistry, priority/P2

All 4 comments

I found https://github.com/openshift/openshift-docs/issues/2459, which suggests an alternative command that worked:

oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
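
For anyone wondering why this works: with --source=docker, oc tag treats the source as a Docker image reference (here openshift/deployment-example:v2 on Docker Hub) instead of an existing image stream tag, so it does not require the image to have been imported first. A quick sketch to verify the result (names as in the guide):

    # Sketch: confirm that :latest now points to an image.
    oc get imagestreamtag deployment-example:latest
    oc describe imagestream deployment-example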

Hit the same problem, and adding the --source=docker bit worked. It would be nice if the documentation could be updated.

Today the --source=docker variant still works, but only partially: the tag is updated, yet no new deployment seems to be triggered, and curl still returns v1 instead of the expected v2. A little confused about where to go from here.
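
For the record, a sketch of things worth checking in that situation (my own guesses, not verified against this setup): whether the deployment config has an image-change trigger on deployment-example:latest, and whether a deployment can be forced by hand:

    # Sketch only: inspect triggers, then force a new deployment.
    oc describe dc deployment-example          # look for an ImageChange trigger on :latest
    oc deploy deployment-example --latest      # older clients (around v1.1.x)
    oc rollout latest dc/deployment-example    # newer clients

If the trigger is missing or watches a different tag, updating :latest will not redeploy anything.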
