Generating tags...
- registry.<domain>.org/<image1> -> registry.<domain>.org/<image1>:2019-11-18_11-14-29.257_EST
- registry.<domain>.org/<image2> -> registry.<domain>.org/<image2>:2019-11-18_11-14-29.257_EST
Checking cache...
- registry.<domain>.org/<image1>: Found. Tagging
- registry.<domain>.org/<image2>: Found. Tagging
Tags used in deployment:
- registry.<domain>.org/<image1> -> registry.<domain>.org/<image1>:2019-11-18_11-14-29.257_EST@sha256:8b7e59345b898fa03fddeb51f1bba588df6b44fd91d72155f2032b2bdac20eb6
- registry.<domain>.org/<image2> -> registry.<domain>.org/<image2>:2019-11-18_11-14-29.257_EST@sha256:5e9619cd5bf34917956cde93eb9697722512d8980a29f9986bba9514d0e96cfd
Expected behavior: deployed images are tagged with git commit info or a datetime, and the tags are consistent.
Actual behavior: Skaffold appends a sha256 digest to the deployment, causing inconsistent tags.
Hi @ConnorBarnhill, thank you for the question!
A digest uniquely identifies each image and pinning to a digest is considered a best practice.
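For illustration, pinning by digest in a pod spec looks like this (the image name below is a placeholder; the digest is the one from the log above):

```yaml
# Pulling by digest is immutable: the same image bytes every time,
# regardless of what the tag is later repointed to.
containers:
  - name: image1
    image: registry.example.org/image1@sha256:8b7e59345b898fa03fddeb51f1bba588df6b44fd91d72155f2032b2bdac20eb6
```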
In your case you have two artifacts: they share the same _tag_ (but presumably have different image names) and have different digests, since they are different images.
I hope that answers your question!
Hi @balopat,
Thanks for the response. That's correct, but this appears to be new behavior. Previously I was able to reference an image using only the provided tag, but now the digest is appended, which breaks the existing workflow.
Image 1 is responsible for launching a Kubernetes pod with image 2 in response to certain events. Ideally, image 1 should launch the version of image 2 that matches its own. I can no longer do this with the digest appended.
Theoretically the tag should still work: if you cut the digest off, you can reference both images with the tag alone. How does the "image1" pod know about the version?
image1 retrieves the version via an environment variable that reads from the Helm values. The tag in the environment variable is specified correctly, but whenever it tries to pull image2 it can't find the tag. I'm not sure I can pull without knowing the digest.
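As a sketch of the setup being described (the variable name and values key are hypothetical, not from my actual chart):

```yaml
# Fragment of image1's Deployment template: the launcher reads the tag
# of image2 from an environment variable populated from Helm values.
env:
  - name: IMAGE2_TAG
    value: "{{ .Values.image2.tag }}"   # e.g. 2019-11-18_11-14-29.257_EST
```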
@ConnorBarnhill can you point to the last version of skaffold that this was working on? I don't think anything has changed here recently.
Maybe I'm misunderstanding your problem, but when I run Skaffold on the microservices project in our examples, I see tagging behavior similar to what you're describing, yet I can still reference the images later by tag alone (without the digest):
➜ microservices git:(master) ✗ skaffold dev
Listing files to watch...
- gcr.io/k8s-skaffold/leeroy-web
- gcr.io/k8s-skaffold/leeroy-app
Generating tags...
- gcr.io/k8s-skaffold/leeroy-web -> gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-web:2019-12-02_13-03-55.627_PST
- gcr.io/k8s-skaffold/leeroy-app -> gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-03-55.627_PST
Checking cache...
- gcr.io/k8s-skaffold/leeroy-web: Found. Tagging
- gcr.io/k8s-skaffold/leeroy-app: Found. Tagging
Tags used in deployment:
- gcr.io/k8s-skaffold/leeroy-web -> gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-web:2019-12-02_13-03-55.627_PST@sha256:648f63305d439f3b1fdf39a9f74cf532fe571a4b8a2c90e09f9ffa234c5953b1
- gcr.io/k8s-skaffold/leeroy-app -> gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-03-55.627_PST@sha256:49a43f68dedc295badd7fa6ba9f4dbdc19d7e8a2002c7d1e84caa2773329a56a
Starting deploy...
- deployment.apps/leeroy-web created
- service/leeroy-app created
- deployment.apps/leeroy-app created
➜ microservices git:(master) ✗ docker images | grep leeroy
gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app 2019-12-02_13-00-27.863_PST b930814d7c8a 3 minutes ago 12.9MB
gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-web 2019-12-02_13-00-27.863_PST 1cc9b6012a63 3 minutes ago 13MB
➜ microservices git:(master) ✗ docker rmi b930814d7c8a
Untagged: gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-00-27.863_PST
Untagged: gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app@sha256:49a43f68dedc295badd7fa6ba9f4dbdc19d7e8a2002c7d1e84caa2773329a56a
Deleted: sha256:b930814d7c8a6a5ef151771e237e4985dff3bc3c0fd7530e0491ed9b1d7a6265
Deleted: sha256:2a45e05ba044ed4e9ec58e96948a3e0a69ae1887557fe2eb3308c4531d2b6d2c
Deleted: sha256:9332663abcffccc1886889ec6d1db83230aecde5d52645cf9242d9aef451bffa
➜ microservices git:(master) ✗ docker pull gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-00-27.863_PST
2019-12-02_13-00-27.863_PST: Pulling from nkubala-demo/gcr.io/k8s-skaffold/leeroy-app
89d9c30c1d48: Already exists
17e2a70a4f43: Pull complete
Digest: sha256:49a43f68dedc295badd7fa6ba9f4dbdc19d7e8a2002c7d1e84caa2773329a56a
Status: Downloaded newer image for gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-00-27.863_PST
gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-00-27.863_PST
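If the digest suffix itself is what's in the way, it can also be stripped client-side before the reference is reused; a minimal sketch using shell parameter expansion (the reference is copied from the log above):

```shell
# Full reference as Skaffold renders it into the deployment
ref='gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-03-55.627_PST@sha256:49a43f68dedc295badd7fa6ba9f4dbdc19d7e8a2002c7d1e84caa2773329a56a'

# Drop everything from the '@' onward, leaving repo:tag
tag_only="${ref%@*}"
echo "$tag_only"   # -> gcr.io/nkubala-demo/gcr.io/k8s-skaffold/leeroy-app:2019-12-02_13-03-55.627_PST
```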
Hi! I have the same issue. Do you have any workaround?
I have the same issue as well; the sha256 hash in the deployment is creating issues when I deploy to OpenShift.
I have the same problem with OpenShift too. I suspect it's a bug in OpenShift, though.
FYI, this also breaks when deploying to a mixed-architecture Kubernetes cluster (arm32v7 and amd64).
Here is the manifest of the multi-arch image in the registry; you can see that the arm version has a different sha256 digest than the amd64 version, as expected.
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 2205,
      "digest": "sha256:1f18c93a15a77984399e4385dbdfa7c2d2ae2db1d3cd47309b12e6b17b03e17d",
      "platform": {
        "architecture": "arm",
        "os": "linux",
        "variant": "v7"
      }
    },
    {
      "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
      "size": 2205,
      "digest": "sha256:b956a740fef85324f8f98b763c22d0a388e8a9b9aab3d0b5b3cc4a3ca9416841",
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    }
  ]
}
However, only the sha256 for the amd64 version of the image gets applied to the deployment. The only workarounds I can see are to create two distinct images and two distinct deployments, one per architecture, which is not ideal, or to have a flag that prevents Skaffold from attaching the sha256.
Image: registry.thesniderpad.com/cava-webhook:2020-04-15_20-11-17.847_MDT@sha256:b956a740fef85324f8f98b763c22d0a388e8a9b9aab3d0b5b3cc4a3ca9416841
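To see which digest belongs to which architecture, the manifest list can be fetched from a live registry with `docker manifest inspect <image>`. As a self-contained illustration, the per-platform digests can be pulled out of the JSON above with a grep (the JSON is condensed here; no registry access is assumed):

```shell
# Manifest list for the multi-arch image (condensed from the JSON above);
# each platform entry carries its own digest.
manifest_json='{"manifests":[{"digest":"sha256:1f18c93a15a77984399e4385dbdfa7c2d2ae2db1d3cd47309b12e6b17b03e17d","platform":{"architecture":"arm","variant":"v7"}},{"digest":"sha256:b956a740fef85324f8f98b763c22d0a388e8a9b9aab3d0b5b3cc4a3ca9416841","platform":{"architecture":"amd64"}}]}'

# List every per-platform digest in the manifest list
printf '%s\n' "$manifest_json" | grep -o 'sha256:[0-9a-f]\{64\}'
```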
In case it helps anyone having issues: today I experimented with OpenShift 4.3 and hit this issue, plus an additional one.
build:
tagPolicy:
envTemplate:
template: "{{.IMAGE_NAME}}"
Thanks to @pdettori for the tip
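In context, that tagPolicy slots into a skaffold.yaml roughly like this (the apiVersion, image name, and context below are placeholders, not taken from my actual project):

```yaml
apiVersion: skaffold/v2beta1
kind: Config
build:
  tagPolicy:
    envTemplate:
      template: "{{.IMAGE_NAME}}"
  artifacts:
    - image: registry.example.com/myapp
      context: .
```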
The additional issue: the files were owned root:root, with rw- for owner and read-only (r--) for group. When Skaffold uses tar to send the files, it can't overwrite them because the group only has read access, not write. Workaround:

FROM registry.access.redhat.com/ubi8/nodejs-12 as base
FROM base
...
COPY --chown=1001:0 server server
RUN chmod -R g=u server

Since I'm using a UBI image, the default USER is 1001. If I don't use --chown=1001:0, the RUN command fails as user 1001 because the files were COPYed as root:root. I need to copy with --chown=1001:0 to make the RUN work properly.
I shared some of the examples I worked out today for a demo.
Using a simple container https://github.com/csantanapr/think2020-nodejs/blob/master/2-kuberntes/skaffold.yaml
Using 2 containers https://github.com/csantanapr/think2020-nodejs/blob/master/3-operations/skaffold.yaml
@davidasnider Skaffold doesn't build multi-arch images at the moment. You might be able to jerry-rig something together with our custom builder.
For those of you encountering this issue, could you please describe your situation: the cluster type, the registry you're pushing to, and the vendor? Are you also trying to build multi-architecture projects? A small example project would be ideal.
Hi, I am using a custom build command:
build:
artifacts:
- image: registry.thesniderpad.com/cava-webhook
context: src/webhook
custom:
buildCommand: ./build.sh
Here's ./build.sh; it just uses docker, but it does build the multi-platform image and push it to a local Docker registry instance.
docker buildx build --no-cache --platform linux/arm/v7,linux/amd64 --tag $IMAGE --push $BUILD_CONTEXT
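For reference, a slightly fuller sketch of such a build.sh. $IMAGE, $BUILD_CONTEXT, and $PUSH_IMAGE are the environment variables Skaffold provides to custom builders; the placeholder defaults and the leading echo are my additions so the script can be exercised safely outside Skaffold (drop the echo to actually build):

```shell
#!/bin/sh
set -eu

# Skaffold sets these for custom builders; the defaults below are
# placeholders so the script can also run outside Skaffold.
IMAGE="${IMAGE:-registry.example.com/myapp:dev}"
BUILD_CONTEXT="${BUILD_CONTEXT:-.}"

# PUSH_IMAGE is "true" when Skaffold wants the image pushed to the registry
push_flag=""
if [ "${PUSH_IMAGE:-false}" = "true" ]; then
  push_flag="--push"
fi

# Echoed for illustration; remove the leading echo to run the real build
echo docker buildx build --no-cache \
  --platform linux/arm/v7,linux/amd64 \
  --tag "$IMAGE" $push_flag "$BUILD_CONTEXT"
```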
For the time being I've stopped creating the amd64 image, as most of my k8s nodes are Raspberry Pis. Buildx with a single platform works fine with the sha256.
I'm going to close this issue as the original question was answered and subsequent me-toos do not have sufficient detail to identify the problems being encountered. If you're still encountering problems, please open new issues.