Origin: Improve high level yaml/json parsing and error handling

Created on 2 Aug 2016 · 15 comments · Source: openshift/origin

Sometimes well-formed YAML/JSON files don't conform to the OpenShift/Kubernetes data structures. My DeploymentConfig was missing the containers key, and as a result I got a cryptic parsing error: DeploymentConfig: only encoded map or array can be decoded into a struct. The error is too vague for novice users.

Version

openshift v3.3.0.4
kubernetes v1.3.0+57fb9ac
etcd 2.3.0+git

Steps To Reproduce
    template:
      metadata:
        labels:
          name: ${NAME}
        name: ${NAME}
      spec:
       containers:
       - image: docker.io/hawkularqe/hawkular-services:latest

Remove the containers: key from the template and create the resource.

Current Result

DeploymentConfig in version "v1" cannot be handled as a DeploymentConfig: only encoded map or array can be decoded into a struct.

Expected Result

A friendly message such as "Missing Containers object" would greatly improve the user experience.
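To illustrate the class of failure, here is a minimal sketch in Python using the standard json module (this is not the actual Go codec OpenShift uses, just an analogy): a struct-targeted decoder can only map a parsed object onto a struct and an array onto a slice, so when containers is missing or has the wrong shape, the decoder reports a generic type error instead of naming the offending field.

```python
import json

# A container list that decodes cleanly: "containers" is an array,
# which a struct-targeted decoder can map onto a slice of containers.
good = json.loads(
    '{"spec": {"containers": '
    '[{"image": "docker.io/hawkularqe/hawkular-services:latest"}]}}'
)

# The failing shape from this issue: "containers" is absent/null,
# so there is no map or array to decode into the container struct.
bad = json.loads('{"spec": {"containers": null}}')

print(type(good["spec"]["containers"]).__name__)  # list
print(type(bad["spec"]["containers"]).__name__)   # NoneType
```

The friendlier message requested above would essentially report which field carried the unexpected type, rather than the generic decoder-level complaint.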

area/usability component/restapi lifecycle/rotten priority/P3

Most helpful comment

You mean like --validate? We default it to off today in oc, but you can still specify it.

All 15 comments

@smarterclayton to make this possible, we will need to validate the input against the swagger schema to see what fields are missing, or what fields are extra, correct? Do you know if there is a plan in kube to integrate something like that to kubectl?

You mean like --validate? We default it to off today in oc, but you can still specify it.

We should probably start turning validate on by default now that all the schema stuff is fixed end to end (if explain works, validate should work).

@fabianofranz opinions about turning --validate on by default?

no, client-side validate fails for schema-less types and required fields that are typically defaulted server-side

Michal, can you find the issue for server side dry run and dupe this on that?

On Nov 11, 2016, at 1:06 AM, Jordan Liggitt [email protected] wrote:

no, client-side validate fails for schema-less types and required fields
that are typically defaulted server-side


+1 for this.

My env:

OpenShift Master: v3.3.1.7
Kubernetes Master: v1.3.0+52492b4

I faced this issue trying to create the following BuildConfig definition:

kind: "BuildConfig"
apiVersion: "v1"
metadata:
  labels:
    app: "tomcat6-webapp-docker"
  name: "tomcat6-webapp-docker"
spec:
  output:
    to:
      kind: "ImageStreamTag"
      name: "tomcat6-webapp:latest"
  runPolicy: "Serial"
  source:
    type: "Git"
    git:
      uri: "http://gogs-cicd.cloud.<mydomain>.com/gogs/openshift-tomcat6-sample.git"
      contextDir: "openshift/docker"
  strategy:
    type: "Docker"
    dockerStrategy:
      env:
        name: "BUILD_LOGLEVEL"
        value: "2"
      from:
        kind: "ImageStreamTag"
        name: "tomcat:6.0.48-jre7"

from oc CLI

oc create -f openshift/template/tomcat6-docker-buildconfig.yaml
Error from server: error when creating "openshift/template/tomcat6-docker-buildconfig.yaml": BuildConfig in version "v1" cannot be handled as a BuildConfig: only encoded map or array can be decoded into a struct

After a while trying to figure out what was wrong, I found the issue was:

      env:
        name: "BUILD_LOGLEVEL"
        value: "2"

Removing the above env snippet from my BuildConfig definition worked:

oc create -f openshift/template/tomcat6-docker-buildconfig.yaml
buildconfig "tomcat6-webapp-docker" created

The --validate flag didn't help at all:

oc create --dry-run --validate -f openshift/template/buildconfig.yaml
error: error validating "openshift/template/buildconfig.yaml": error validating data: couldn't find type: v1.BuildConfig; if you choose to ignore these errors, turn validation off with --validate=false

Well, at least I hope these issue comments save someone a few hours.
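An editorial note on the snippet above (my reading of the failure, not confirmed in the thread): the env variable probably did not need to be removed, only reshaped. In the v1 schema, env is a list of name/value mappings, not a single mapping, so assuming that was the only problem, writing it as a sequence item should decode:

```yaml
      env:
      - name: "BUILD_LOGLEVEL"   # leading "-" makes env a list of EnvVar entries
        value: "2"
```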

I had the same error today:

$ oc create --dry-run --validate -f portal-intranet-dev-dc.yml
error: error validating "portal-intranet-dev-dc.yml": error validating data: couldn't find type: v1.DeploymentConfig; if you choose to ignore these errors, turn validation off with --validate=false

It was an export from another OpenShift cluster.

$ cat portal-intranet-dev-dc.yml
apiVersion: v1
kind: DeploymentConfig
metadata:
  generation: 1
  labels:
    app: portal-intranet-dev
  name: portal-intranet-dev
spec:
  replicas: 1
  selector:
    app: portal-intranet-dev
    deploymentconfig: portal-intranet-dev
  strategy:
    activeDeadlineSeconds: 21600
    resources: {}
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
    type: Rolling
  template:
    metadata:
      labels:
        app: portal-intranet-dev
        deploymentconfig: portal-intranet-dev
    spec:
      containers:
        imagePullPolicy: Always
        name: portal-intranet-dev
        ports:
        - containerPort: 9999
          protocol: TCP
        - containerPort: 8080
          protocol: TCP
        - containerPort: 8009
          protocol: TCP
        - containerPort: 9990
          protocol: TCP
        resources:
          limits:
            cpu: "10"
            memory: 15Gi
          requests:
            cpu: "2"
            memory: 5Gi
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /opt/liferay/data
          name: volume-4mko9
        - mountPath: /opt/jboss/liferay/logs
          name: volume-st8e3
        - mountPath: /opt/jboss/liferay/jboss-eap-6.4.0/standalone/log
          name: volume-y3mox
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: volume-4mko9
        persistentVolumeClaim:
          claimName: portal-data-dev
      - name: volume-st8e3
        persistentVolumeClaim:
          claimName: portal-logs-dev
      - name: volume-y3mox
        persistentVolumeClaim:
          claimName: portal-standalone-logs-dev
  test: false
  triggers:
  - type: ConfigChange
  - imageChangeParams:
      automatic: true
      containerNames:
      - portal-intranet-dev
      from:
        kind: ImageStreamTag
        name: portal-intranet-dev:latest
        namespace: portal-intranet-dev
    type: ImageChange

Using the JSON version of the same exported DeploymentConfig, it worked:

$ oc create -f portal-intranet-dev-dc.json
deploymentconfig "portal-intranet-dev" created
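An editorial note: the exported YAML above appears to have the same shape problem as the earlier reports. Under spec.template.spec, containers is written as a single mapping (there is no leading "-" before imagePullPolicy), while the decoder expects a list of containers. The JSON export presumably serialized containers as a proper array, which would explain why it imported cleanly. A sketch of the corrected YAML, showing only the first two keys:

```yaml
    spec:
      containers:
      - imagePullPolicy: Always   # leading "-" makes containers a list
        name: portal-intranet-dev
```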

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

/reopen
/remove-lifecycle rotten

@lomholdt: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
