Jx: DevPod can't be deployed because of deprecated skaffold digest hex

Created on 3 Apr 2019 · 19 comments · Source: jenkins-x/jx

Summary

DevPods worked for me until a few days ago, when suddenly they wouldn't be deployed anymore. I'd just get a 503 error from the ingress. The problem is rooted in the skaffold.yaml, I think. It says:

    ...
    profiles:
    - name: dev
      build:
        artifacts:
        - docker: {}
        tagPolicy:
          envTemplate:
            template: '{{.DOCKER_REGISTRY}}/organisation/repo:{{.DIGEST_HEX}}'
        local: {}
      deploy:
        helm:
          releases:
          - name: brownbag
            chartPath: charts/brownbag
            setValueTemplates:
              image.repository: '{{.DOCKER_REGISTRY}}/organisation/repo'
              image.tag: '{{.DIGEST_HEX}}'

Since Skaffold v0.23.0, DIGEST_HEX and the other DIGEST variables are deprecated and replaced by the literal _DEPRECATED_<value_name>_, which breaks pushing the images to the Docker registry. So when running watch.sh, images are not tagged with a digest hex, but with the string literal _DEPRECATED_DIGEST_HEX_:

   curl docker-registry.jx.<ip>.nip.io/v2/organisation/repo/tags/list | jq .
    {
      "name": "organisation/repo",
      "tags": [
        "_DEPRECATED_DIGEST_HEX_",
        "0.0.1"
      ]
    }

Nonetheless, when the same value is retrieved for the Helm chart a few lines down, the replacement works as intended, so the deployment expects the actual digest hex value as the tag. Pod creation fails with ImagePullBackOff:


   kubectl describe pod repo-repo-589d4d987-5sbqk -n=jx-edit-simon

   Events:
   Type     Reason     Age                     From                                                Message
   ----     ------     ----                    ----                                                -------
   Normal   Scheduled  8m44s                   default-scheduler                                   Successfully assigned  jx-edit-simon/repo-repo-589d4d987-5sbqk to gke-beakripple-default-pool-21d1dc97-p7gp
   Normal   Pulling    7m14s (x4 over 8m43s)   kubelet, gke-beakripple-default-pool-21d1dc97-p7gp  pulling image  "10.35.243.48:5000/organisation/repo:03efa48a18fd3b9310db6637abcb8e1f3cc1656ff0d8939b71713f194a3b97a6"
   Warning  Failed     7m14s (x4 over 8m43s)   kubelet, gke-beakripple-default-pool-21d1dc97-p7gp  Failed to pull image  "10.35.243.48:5000/organisation/repo:03efa48a18fd3b9310db6637abcb8e1f3cc1656ff0d8939b71713f194a3b97a6": rpc error: code =  Unknown desc = Error response from daemon: manifest for 10.35.243.48:5000/organisation/repo:03efa48a18fd3b9310db6637abcb8e1f3cc1656ff0d8939b71713f194a3b97a6 not found
   Warning  Failed     7m14s (x4 over 8m43s)   kubelet, gke-beakripple-default-pool-21d1dc97-p7gp  Error: ErrImagePull
   Warning  Failed     6m46s (x7 over 8m42s)   kubelet, gke-beakripple-default-pool-21d1dc97-p7gp  Error: ImagePullBackOff
   Normal   BackOff    3m33s (x21 over 8m42s)  kubelet, gke-beakripple-default-pool-21d1dc97-p7gp  Back-off pulling image  "10.35.243.48:5000/organisation/repo:03efa48a18fd3b9310db6637abcb8e1f3cc1656ff0d8939b71713f194a3b97a6"

Steps to reproduce the behavior

Using the versions specified below, set up a DevPod by running jx create devpod -l go --reuse --sync and start both watch.sh in the DevPod and jx sync locally.

Expected behavior

When files change, the images are tagged correctly, so that they can be found during deployment into my dev environment.

Actual behavior

The image is pushed to the registry with the tag _DEPRECATED_DIGEST_HEX_, while the Helm chart expects the actual digest hex value when deploying into the dev environment.

Jx version

The output of jx version is:

    NAME               VERSION
    jx                 1.3.1072
    Kubernetes cluster v1.11.7-gke.12
    kubectl            v1.14.0
    helm client        v2.13.0+g79d0794
    helm server        v2.13.0+g79d0794
    git                git version 2.17.1
    Operating System   Ubuntu 18.04.2 LTS

The output of skaffold version in the DevPod is:

    WARN[0000] Using SKAFFOLD_DEPLOY_NAMESPACE env variable is deprecated. Please use SKAFFOLD_NAMESPACE instead.
    v0.25.0

Jenkins type

  • [X] Classic Jenkins

Kubernetes cluster

GKE, using jx create cluster --provider gke

Operating system / Environment

see above

Workaround

I came up with an impractical workaround: I added an environment variable containing the datetime at skaffold run time, which replaces the digest hex. I would much rather fix this with one of the supported tag policies, like sha256. Unfortunately, that can't be combined with the envTemplate approach, so you'd have to hard-code the Docker registry address. Can you evaluate whether my assumptions are correct and whether there are strategies to fix this without breaking the current workflow? Let me know, I'd be happy to fix this if possible.
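For illustration, a minimal sketch of that datetime workaround, with assumptions clearly marked: TAG_DATETIME is a made-up variable name (the issue doesn't give the exact one), and the watch.sh invocation is left commented out.

```shell
# Sketch of the datetime-tag workaround described above.
# TAG_DATETIME is a hypothetical name; skaffold.yaml would then reference it as
#   template: '{{.DOCKER_REGISTRY}}/organisation/repo:{{.TAG_DATETIME}}'
TAG_DATETIME="$(date +%Y%m%d%H%M%S)"   # e.g. 20190403141500, a valid docker tag
export TAG_DATETIME
echo "tagging image as :$TAG_DATETIME"
# ./watch.sh   # would now build and push with the datetime tag
```

The downside, as noted above, is that every run produces a new tag and the variable has to be exported before skaffold runs.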

area/devpod area/jenkins kind/bug lifecycle/rotten priority/important-longterm

Most helpful comment

Workaround:

#!/usr/bin/env bash

# watch the java files and continuously deploy the service
mvn clean install
export UUID=$(uuidgen)
skaffold run -p dev
reflex -r "\.java$" -- bash -c 'export UUID=$(uuidgen) && mvn install && skaffold run -p dev'

Then in the skaffold.yaml:

        template: '{{.DOCKER_REGISTRY}}/foxsports/bedrock-k8s:{{.UUID}}'
        setValueTemplates:
          #      TODO: CHANGE_ME
          image.repository: '{{.DOCKER_REGISTRY}}/foxsports/bedrock-k8s'
          image.tag: '{{.UUID}}'

All 19 comments

encountering the same issue (jx 2.0.41) and the solution is working for me - many thanks for the workaround !

May I ask what you practically wrote into the skaffold.yaml to achieve this? If I just use dateTime, I am getting an "Error response from daemon: invalid reference format".

May I ask what you practically wrote into the skaffold.yaml to achieve this? If I just use dateTime, I am getting an "Error response from daemon: invalid reference format".

in my skaffold.yaml, I changed DIGEST_HEX to CURRENT_TIME, and then after making the devpod, from the pod's shell (before building with ./watch.sh)
export CURRENT_TIME="201904250817a"

Thanks, I didn't know that variable.

I also tried downgrade of skaffold to 0.22, which seems to work.

@jeremy-seiu @eickler I put the environment variable in the Makefile so I wouldn't have to do that extra manual step every time I spin up the DevPod. But again, I feel this workaround is impractical and just not the right way to solve this. Anyway, I'm glad you could use it as a temporary fix.
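A hypothetical sketch of that Makefile approach; the comment only says the variable was moved into the Makefile, so the target name and the use of CURRENT_TIME here are assumptions:

```make
# Hypothetical Makefile fragment (names assumed): set the tag variable in the
# recipe so each invocation gets a fresh value without a manual export.
# Note the $$ - make requires it to pass a literal $ to the shell.
watch:
	CURRENT_TIME=$$(date +%Y%m%d%H%M%S) ./watch.sh
```

This keeps the workaround out of the interactive shell, at the cost of tying the build entry point to the Makefile.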

@SimonKienzler that's certainly a more clever way to go about it ! also, agreed - I hope someone will put some eyes on it soon

Thanks, I didn't know that variable.

I also tried downgrade of skaffold to 0.22, which seems to work.

well, it is a made-up variable - basically what we're doing is changing the digest-based tag to a timestamp tag via an environment variable for the built image (I think that sounds about right?!)

Workaround:

#!/usr/bin/env bash

# watch the java files and continuously deploy the service
mvn clean install
export UUID=$(uuidgen)
skaffold run -p dev
reflex -r "\.java$" -- bash -c 'export UUID=$(uuidgen) && mvn install && skaffold run -p dev'

Then in the skaffold.yaml:

        template: '{{.DOCKER_REGISTRY}}/foxsports/bedrock-k8s:{{.UUID}}'
        setValueTemplates:
          #      TODO: CHANGE_ME
          image.repository: '{{.DOCKER_REGISTRY}}/foxsports/bedrock-k8s'
          image.tag: '{{.UUID}}'


this works great for me in a nodejs environment as well, thanks !

It is described here in the Skaffold releases notes. https://github.com/GoogleContainerTools/skaffold/blob/master/CHANGELOG.md#v0230-release---2142019

Which makes me think more and more people will run into this.
Let's see if we can find a proper fix for this that works for everyone.

Those workarounds are simply not necessary. Here's my first try (tested, working on GKE + GCR, Skaffold 0.29), which I came up with just by reading the Skaffold documentation.

DIGEST_HEX etc. are deprecated for a reason: a new tagging mechanism exists that does this without environment variables (envTemplate -> tagPolicy: sha256).

apiVersion: skaffold/v1beta10
kind: Config
build:
  artifacts:
  - image: YOUR_PROJECTNAMESPACE/IMAGE
    context: .
    docker: {}
  tagPolicy:
    envTemplate:
      template: '{{.DOCKER_REGISTRY}}/{{.IMAGE_NAME}}:{{.VERSION}}'
  local: {}
deploy:
  kubectl: {}
profiles:
- name: dev
  build:
    tagPolicy:
        sha256: {}
    artifacts:
    - image: YOUR_DOCKER_REGISTRY/NAMESPACE/APP
    local: {}
  deploy:
    helm:
      releases:
      - name: goqs
        chartPath: charts/goqs
        setValueTemplates:
          # NOT CHANGED! This may throw a warning.
          image.repository: '{{.DOCKER_REGISTRY}}/{{.IMAGE_NAME}}'
          image.tag: '{{.DIGEST}}'

Removal of DIGEST_HEX etc. is approaching - an issue mentions "not before 15/05/2019". The quickstarts should be updated.

@freeo Thanks for pointing that out. Did you hard-code the Docker registry URI? I tried using the SHA256 tagging strategy, but could not use the templating approach that the old solution allowed for (by replacing the parameters in the envTemplate). If you found a way to combine both, that's great and I'd like to know how!

/area jenkins

is there any movement on this? I've been getting by with setting a new UUID (sha256 isn't available in the nodejs devpod) each build, but this is not ideal : )

I am also having this issue but the workaround seems to work.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close

@jenkins-x-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.
