Origin: How start app on different nodes over console (oc new-app)

Created on 16 Nov 2016 · 17 Comments · Source: openshift/origin

oc new-app on different nodes

Version

oc v1.3.0
kubernetes v1.3.0+52492b4
features: Basic-Auth GSSAPI Kerberos SPNEGO

openshift v1.3.0
kubernetes v1.3.0+52492b4

Steps To Reproduce

After running the command, all apps start on the same node, but they need to run on different nodes (app1 starts on node1, app2 on node2).

Current Result

After running the command, all apps start on the same node.

Expected Result
Additional Information

[try to run $ oadm diagnostics command if possible]
[if you are reporting issue related to builds, provide build logs with BUILD_LOGLEVEL=5]
[consider attaching output of the $ oc get all -o json -n <namespace> command to the issue]
[visit https://docs.openshift.org/latest/welcome/index.html]

component/cli kind/question priority/P3

All 17 comments

@Evgeniy-Bondarenko why would you care about on which nodes the app runs?

@mfojtik

why would you care about on which nodes the app runs?

Database failover (for minimal downtime).

I have 3 nodes in the OpenShift cluster and I want to run a database instance on each node:
db1 - node1
db2 - node2
db3 - node3

This gives minimal downtime (if any node becomes unavailable).

  1. What key, when starting the app, allows you to assign it to a selected node?
  2. Is there any other way for HA solutions?

@Evgeniy-Bondarenko you can use node selectors: http://kubernetes.io/docs/user-guide/node-selection/

After creating and running the app?

oc new-app --docker-image=mongo --name2
oc label nodes node2

right?

@Evgeniy-Bondarenko the problem is that if you pin db3 to node3 with a node selector and node3 goes down, db3 will be down, waiting for the node to come back. Without pinning, db3 (if it ran on node3) would be rescheduled to other available nodes.

@Evgeniy-Bondarenko you can use --labels in new-app to add labels to every resource it creates (or do it afterwards).

oc new-app --docker-image=mongo --name mongo2 -n database  --labels=node2
error: unexpected label spec: node2

but I have node:

oc get nodes
NAME        STATUS    AGE
node1   Ready     1d
node2   Ready     1d
node3   Ready     1d

@mfojtik please write me a sample command to run on a different node

@mfojtik

the problem is that if you pin db3 to node3 with a node selector and node3 goes down, db3 will be down, waiting for the node to come back. Without pinning, db3 (if it ran on node3) would be rescheduled to other available nodes.

is it possible to manually reschedule/move applications between nodes for balancing/HA (with or without pinning to a node)?

oc label nodes node1 name=node1 (you need to be cluster admin for that)

Then edit the PodSpec (DeploymentConfig?) and add

nodeSelector:
    name: node1

(check the linked kubernetes docs that explains it)
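The same change can also be applied from the CLI instead of editing the object by hand — a hedged sketch, assuming a DeploymentConfig named `mongo2` (illustrative) and an oc build that includes the `patch` subcommand:

```shell
# Add a nodeSelector to the pod template so new pods only schedule
# on nodes carrying the label name=node2.
# The dc name "mongo2" is an example; substitute your own.
oc patch dc mongo2 -p '{"spec":{"template":{"spec":{"nodeSelector":{"name":"node2"}}}}}'
# The ConfigChange trigger on the dc should roll out a new deployment
# automatically after the patch.
```

Note that this patches the pod template (`spec.template.spec`), not the top-level labels, which is why `--labels` on new-app has no scheduling effect.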

$ oc label nodes node1 name=node1
node "node1" labeled
$ oc label nodes node2 name=node2
node "node2" labeled
$ oc label nodes node3 name=node3
node "node3" labeled
$ oc  get nodes --show-labels
NAME        STATUS    AGE       LABELS
node1   Ready     1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node1,name=node1
node2   Ready     1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node2,name=node2
node3   Ready     1d        beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node3,name=node3

$ oc new-app --docker-image=mongo --name mongo6 -n database --labels=name=node2
--> Found Docker image 25427f3 (6 weeks old) from Docker Hub for "mongo"

--> Creating resources with label name=node2 ...
    imagestream "mongo6" created
    deploymentconfig "mongo6" created
    service "mongo6" created
--> Success
    WARNING: No Docker registry has been configured with the server. Automatic builds and deployments may not function.
    Run 'oc status' to view your app.
 $ oc get pods -o wide
NAME             READY     STATUS             RESTARTS   AGE       IP           NODE
mongo-1-ehvta    1/1       Running            0          55m       172.17.0.2   node1
mongo2-1-ygzv5   1/1       Running            0          11m       172.17.0.3   node1
mongo3-1-scqgy   1/1       Running            0          10m       172.17.0.4   node1
mongo4-1-17pi2   1/1       Running            0          6m        172.17.0.5   node1
mongo6-1-4papw   1/1       Running            0          7s        172.17.0.6   node1
nginx-1-x5y9k    1/1       Running            0          2h        172.17.0.2   node2

@mfojtik it is still running on node1

@mfojtik only the deploy pod runs on node2:

$oc get pods -o wide
mongo8-1-deploy   1/1       Running   0          1s        172.17.0.3   node2

but after the deploy, the pod starts on node1:

$oc get pods -o wide
mongo8-1-1-fmp0x   1/1       Running   0          3m        172.17.0.8   node1

oc new-app --label only labels what you deploy, just like you labelled your nodes. You need to read the link again and add the nodeselector:

  nodeSelector:
    name: node2

yeah new-app --label is not useful for this. you'd have to add the nodeselector to your pod templates (part of the deploymentconfig) after new-app creates them.

i doubt we'd ever add a flag to new-app to let you specify a nodeselector to be added at the time new-app generates the deploymentconfig/podtemplate (because it's just one of many things you might ideally want to control and we aren't going to add flags for all of them).

if you want to avoid deploying to the wrong node, use oc new-app -o yaml > file.yaml and add the nodeselector to the yaml, then oc create -f file.yaml
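The workflow described above, sketched end to end (the image, app name, and file name are illustrative):

```shell
# 1. Generate the resources as YAML without creating anything on the server
oc new-app --docker-image=mongo --name mongo2 -o yaml > mongo2.yaml

# 2. Edit mongo2.yaml: in the DeploymentConfig, add under spec.template.spec:
#      nodeSelector:
#        name: node2

# 3. Create the edited resources; the pod will only schedule on nodes
#    labeled name=node2
oc create -f mongo2.yaml
```

This avoids the pod ever being scheduled to the wrong node, since the selector is in place before the first deployment.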

@Evgeniy-Bondarenko please let us know if we've answered your question/you've been able to get the node scheduling behavior you desire.

@bparees

yeah new-app --label is not useful for this. you'd have to add the nodeselector to your pod templates (part of the deploymentconfig) after new-app creates them.

Yes, that way does not work (apps start on a random node or the first node).

i doubt we'd ever add a flag to new-app to let you specify a nodeselector to be added at the time new-app generates the deploymentconfig/podtemplate (because it's just one of many things you might ideally want to control and we aren't going to add flags for all of them).

Is it possible to change a template to automatically add a nodeSelector section, or to use some key in the start command?

if you want to avoid deploying to the wrong node, use oc new-app -o yaml > file.yaml and add the nodeselector to the yaml, then oc create -f file.yaml

 $ oc new-app -o yaml
error: You must specify one or more images, image streams, templates, or source code locations to create an application.

To list all local templates and image streams, use:

  oc new-app -L

To search templates, image streams, and Docker images that match the arguments provided, use:

  oc new-app -S php
  oc new-app -S --template=ruby
  oc new-app -S --image-stream=mysql
  oc new-app -S --docker-image=python

See 'oc new-app -h' for help and examples.

Add the necessary image to your example:

$ oc new-app --docker-image=mongo -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: ImageStream
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: mongo
    name: mongo
  spec:
    tags:
    - annotations:
        openshift.io/imported-from: mongo
      from:
        kind: DockerImage
        name: mongo
      generation: null
      importPolicy: {}
      name: latest
  status:
    dockerImageRepository: ""
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: mongo
    name: mongo
  spec:
    replicas: 1
    selector:
      app: mongo
      deploymentconfig: mongo
    strategy:
      resources: {}
    template:
      metadata:
        annotations:
          openshift.io/container.mongo.image.entrypoint: '["/entrypoint.sh","mongod"]'
          openshift.io/generated-by: OpenShiftNewApp
        creationTimestamp: null
        labels:
          app: mongo
          deploymentconfig: mongo
      spec:
        containers:
        - image: mongo
          name: mongo
          ports:
          - containerPort: 27017
            protocol: TCP
          resources: {}
          volumeMounts:
          - mountPath: /data/configdb
            name: mongo-volume-1
          - mountPath: /data/db
            name: mongo-volume-2
        volumes:
        - emptyDir: {}
          name: mongo-volume-1
        - emptyDir: {}
          name: mongo-volume-2
    test: false
    triggers:
    - type: ConfigChange
    - imageChangeParams:
        automatic: true
        containerNames:
        - mongo
        from:
          kind: ImageStreamTag
          name: mongo:latest
      type: ImageChange
  status: {}
- apiVersion: v1
  kind: Service
  metadata:
    annotations:
      openshift.io/generated-by: OpenShiftNewApp
    creationTimestamp: null
    labels:
      app: mongo
    name: mongo
  spec:
    ports:
    - name: 27017-tcp
      port: 27017
      protocol: TCP
      targetPort: 27017
    selector:
      app: mongo
      deploymentconfig: mongo
  status:
    loadBalancer: {}
kind: List
metadata: {}

@bparees @andrewklau Which line do I change so I can reliably run the application on the required node?

you need to introduce a nodeSelector under spec.template.spec

https://docs.openshift.org/latest/dev_guide/deployments/basic_deployment_operations.html#assigning-pods-to-specific-nodes
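Applied to the DeploymentConfig generated earlier, the pod template gains a `nodeSelector` under `spec.template.spec` — a minimal sketch showing only the relevant part of the object (the label `name=node2` assumes the nodes were labeled as shown above):

```yaml
kind: DeploymentConfig
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mongo
        deploymentconfig: mongo
    spec:
      # schedule pods only on nodes carrying the label name=node2
      nodeSelector:
        name: node2
      containers:
      - image: mongo
        name: mongo
```

The selector sits next to `containers` inside the pod spec, not at the top level of the DeploymentConfig — placing it anywhere else has no scheduling effect.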

Problem resolved after:

  1. adding labels to the nodes
$ oc label nodes node1 name=node1
node "node1" labeled
$ oc label nodes node2 name=node2
node "node2" labeled
  2. running with the label
$ oc new-app --docker-image=mongo --name mongo2 -n database --labels=name=node2
  3. adding a nodeSelector section to the deployment config (edit the deployment config in the web UI) after the pod runs
  nodeSelector:
    name: node2

Thank you for the support!
Please consider adding the ability to manually select a node in future versions of the OpenShift interface (not by editing YAML).
