Skaffold: Skaffold won't run deployment, error: exiting dev mode because first deploy failed: 1/1 deployment(s) failed

Created on 14 Aug 2020 · 8 comments · Source: GoogleContainerTools/skaffold

When I start this deployment via kubectl directly:

alex@desktop:~/projects/microservices/ticketing/k8s$ microk8s kubectl apply -f .
deployment.apps/auth-depl created
service/auth-srv created
ingress.networking.k8s.io/ingress-service created
alex@desktop:~/projects/microservices/ticketing/k8s$ microk8s kubectl get all
NAME                             READY   STATUS    RESTARTS   AGE
pod/auth-depl-7b6dffd964-kkkmj   1/1     Running   0          5s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/auth-srv     ClusterIP   10.152.183.88   <none>        3000/TCP   5s
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP    12h

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/auth-depl   1/1     1            1           5s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/auth-depl-7b6dffd964   1         1         1       5s
alex@desktop:~/projects/microservices/ticketing/k8s

Everything works fine and all services run smoothly.

But when I try to run it via Skaffold, it doesn't work:

alex@desktop:~/projects/microservices/ticketing$ skaffold dev -v=debug
INFO[0000] starting gRPC server on port 50051           
INFO[0000] starting gRPC HTTP server on port 50052      
INFO[0000] Skaffold &{Version:v1.13.1 ConfigVersion:skaffold/v2beta6 GitVersion: GitCommit:1d10bd4779e3d5e991fcced067367c2c993f3e6e GitTreeState:clean BuildDate:2020-08-04T21:32:36Z GoVersion:go1.14.2 Compiler:gc Platform:linux/amd64} 
INFO[0000] Loaded Skaffold defaults from "/home/alex/.skaffold/config" 
DEBU[0000] config version "skaffold/v2alpha3" out of date: upgrading to latest "skaffold/v2beta6" 
INFO[0000] Using kubectl context: microk8s              
DEBU[0000] Using builder: local                         
DEBU[0000] setting Docker user agent to skaffold-v1.13.1 
Listing files to watch...
 - animalinstinct/auth
DEBU[0000] Found dependencies for dockerfile: [{package.json /app true} {. /app true}] 
DEBU[0000] Skipping excluded path: node_modules         
INFO[0000] List generated in 2.388898ms                 
Generating tags...
 - animalinstinct/auth -> DEBU[0000] Running command: [git describe --tags --always] 
DEBU[0000] Command output: [a6f6576
]                   
DEBU[0000] Running command: [git status . --porcelain]  
DEBU[0000] Command output: [ M auth/Dockerfile
 D auth/skaffold
 M auth/src/index.ts
?? auth/{
] 
animalinstinct/auth:a6f6576-dirty
INFO[0000] Tags generated in 4.779ms                    
Checking cache...
DEBU[0000] Found dependencies for dockerfile: [{package.json /app true} {. /app true}] 
DEBU[0000] Skipping excluded path: node_modules         
 - animalinstinct/auth: Found Locally
INFO[0000] Cache check complete in 7.342514ms           
Tags used in deployment:
 - animalinstinct/auth -> animalinstinct/auth:3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263
DEBU[0000] Local images can't be referenced by digest.
They are tagged and referenced by a unique, local only, tag instead.
See https://skaffold.dev/docs/pipeline-stages/taggers/#how-tagging-works 
DEBU[0000] getting client config for kubeContext: ``    
Starting deploy...
DEBU[0000] Running command: [kubectl version --client -ojson] 
DEBU[0000] Command output: [{
  "clientVersion": {
    "major": "1",
    "minor": "18",
    "gitVersion": "v1.18.6",
    "gitCommit": "dff82dc0de47299ab66c83c626e08b245ab19037",
    "gitTreeState": "clean",
    "buildDate": "2020-07-16T14:19:25Z",
    "goVersion": "go1.13.13",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
] 
DEBU[0000] Running command: [kubectl --context microk8s create --dry-run=client -oyaml -f /home/alex/projects/microservices/ticketing/k8s/auth-depl.yaml -f /home/alex/projects/microservices/ticketing/k8s/ingress-srv.yaml] 
DEBU[0000] Command output: [apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - image: animalinstinct/auth
        name: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  namespace: default
spec:
  ports:
  - name: auth
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: auth
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress-service
  namespace: default
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - backend:
          serviceName: auth-srv
          servicePort: 3000
        path: /api/users/?(.*)
] 
DEBU[0000] manifests with tagged images: apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - image: animalinstinct/auth:3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263
        name: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  namespace: default
spec:
  ports:
  - name: auth
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: auth
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress-service
  namespace: default
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - backend:
          serviceName: auth-srv
          servicePort: 3000
        path: /api/users/?(.*) 
DEBU[0000] manifests with labels apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/managed-by: skaffold
    skaffold.dev/run-id: b5f15f4e-f76c-4ef0-8a2a-c4e1c6aa852f
  name: auth-depl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
        app.kubernetes.io/managed-by: skaffold
        skaffold.dev/run-id: b5f15f4e-f76c-4ef0-8a2a-c4e1c6aa852f
    spec:
      containers:
      - image: animalinstinct/auth:3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263
        name: auth
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/managed-by: skaffold
    skaffold.dev/run-id: b5f15f4e-f76c-4ef0-8a2a-c4e1c6aa852f
  name: auth-srv
  namespace: default
spec:
  ports:
  - name: auth
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: auth
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  labels:
    app.kubernetes.io/managed-by: skaffold
    skaffold.dev/run-id: b5f15f4e-f76c-4ef0-8a2a-c4e1c6aa852f
  name: ingress-service
  namespace: default
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - backend:
          serviceName: auth-srv
          servicePort: 3000
        path: /api/users/?(.*) 
DEBU[0000] Running command: [kubectl --context microk8s get -f - --ignore-not-found -ojson] 
DEBU[0000] Command output: []                           
DEBU[0000] 3 manifests to deploy. 3 are updated or new  
DEBU[0000] Running command: [kubectl --context microk8s apply -f -] 
 - deployment.apps/auth-depl created
 - service/auth-srv created
 - ingress.networking.k8s.io/ingress-service created
INFO[0001] Deploy complete in 1.072179327s              
Waiting for deployments to stabilize...
DEBU[0001] getting client config for kubeContext: ``    
DEBU[0001] checking status deployment/auth-depl         
DEBU[0001] Running command: [kubectl --context microk8s rollout status deployment auth-depl --namespace default --watch=false] 
DEBU[0001] Command output: [Waiting for deployment "auth-depl" rollout to finish: 0 of 1 updated replicas are available...
] 
DEBU[0001] Pod "auth-depl-684dffdc59-pfn2w" scheduled: checking container statuses 
 - deployment/auth-depl: waiting for rollout to finish: 0 of 1 updated replicas are available...
    - pod/auth-depl-684dffdc59-pfn2w: creating container auth
DEBU[0001] Running command: [kubectl --context microk8s rollout status deployment auth-depl --namespace default --watch=false] 
DEBU[0001] Command output: [Waiting for deployment "auth-depl" rollout to finish: 0 of 1 updated replicas are available...
] 
DEBU[0001] Pod "auth-depl-684dffdc59-pfn2w" scheduled: checking container statuses 
[... identical "rollout status" checks repeated for the next ~7 seconds, output unchanged ...]
 - deployment/auth-depl: waiting for rollout to finish: 0 of 1 updated replicas are available...
    - pod/auth-depl-684dffdc59-pfn2w: container auth is waiting to start: animalinstinct/auth:3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263 can't be pulled
 - deployment/auth-depl failed. Error: waiting for rollout to finish: 0 of 1 updated replicas are available....
Cleaning up...
DEBU[0008] Running command: [kubectl --context microk8s create --dry-run=client -oyaml -f /home/alex/projects/microservices/ticketing/k8s/auth-depl.yaml -f /home/alex/projects/microservices/ticketing/k8s/ingress-srv.yaml] 
DEBU[0008] Command output: [apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
      - image: animalinstinct/auth
        name: auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
  namespace: default
spec:
  ports:
  - name: auth
    port: 3000
    protocol: TCP
    targetPort: 3000
  selector:
    app: auth
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
  name: ingress-service
  namespace: default
spec:
  rules:
  - host: ticketing.dev
    http:
      paths:
      - backend:
          serviceName: auth-srv
          servicePort: 3000
        path: /api/users/?(.*)
] 
DEBU[0008] Running command: [kubectl --context microk8s delete --ignore-not-found=true -f -] 
 - deployment.apps "auth-depl" deleted
 - service "auth-srv" deleted
 - ingress.networking.k8s.io "ingress-service" deleted
INFO[0008] Cleanup complete in 382.273161ms             
exiting dev mode because first deploy failed: 1/1 deployment(s) failed
alex@desktop:~/projects/microservices/ticketing$

Information

  • Skaffold version: v1.13.1
  • Operating system: Ubuntu 20.04 (Focal Fossa)
  • Docker version 19.03.11, build dd360c7
  • Contents of skaffold.yaml:
apiVersion: skaffold/v2alpha3
kind: Config
deploy:
  kubectl:
    manifests:
      - k8s/*
build:
  local:
    push: false
  artifacts:
    - image: animalinstinct/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .

auth-depl.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: animalinstinct/auth
---
apiVersion: v1
kind: Service
metadata:
  name: auth-srv
spec:
  selector:
    app: auth
  ports:
    - name: auth
      protocol: TCP
      port: 3000
      targetPort: 3000

Steps to reproduce the behavior

  1. Clone the sample skaffold project:
    git@github.com:AnimalInstinct/microservices-ticketing.git
  2. Run:
    skaffold dev -v=debug

Images list

alex@desktop:~/projects/microservices/ticketing$ docker images
REPOSITORY            TAG                                                                IMAGE ID            CREATED             SIZE
animalinstinct/auth   3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263   3c696c4506d3        21 minutes ago      194MB
animalinstinct/auth   a6f6576-dirty                                                      3c696c4506d3        21 minutes ago      194MB
animalinstinct/auth   b15bc2ac7509ad7f00bd5bee5360c61af4bdecfdb54afbc20e319633082c2c33   b15bc2ac7509        25 minutes ago      194MB
node                  alpine                                                             0f2c18cef5d3        29 hours ago        117MB

Labels: kind/bug, priority/awaiting-more-evidence

All 8 comments

Hi @AnimalInstinct, thanks for opening up this issue. When cloning the repo and running skaffold dev on my own machine, it seems to work.

Here's my machine info:
Skaffold v1.13.1
OS: MacOS 10.15.6
Docker v19.03.12
I'm also running with minikube as my context

I'm not completely sure what could be causing this issue on your machine. It seems to error after this message:
- pod/auth-depl-684dffdc59-pfn2w: container auth is waiting to start: animalinstinct/auth:3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263 can't be pulled

It could be an issue with running Skaffold with microk8s.

@AnimalInstinct I don't have any experience with microk8s, but you might want to read through #3571 — some people noted that they had to run a separate microk8s.kubectl config step (https://github.com/GoogleContainerTools/skaffold/issues/3571#issuecomment-578566142).

Thanks @briandealwis, I've hit that config issue before and found the thread you recommended. It resolved a different problem, though; it doesn't overlap with this one. In this case something is wrong with the container deployment and I can't find the reason why.

(Quoting @MarlonGamez's comment above in full.)

Hi @MarlonGamez, yep, I also cloned and ran it on my Windows machine and it works smoothly there too, so it is definitely tied to microk8s and Ubuntu.

@briandealwis Did you find any solution to this problem? I guess the thing is that the last skaffold dev deployment didn't clean up properly. I'm facing the same issue, and running skaffold delete doesn't seem to work for me either.

@jugaldb please open a new issue as this issue doesn't mention clean up or skaffold delete.

No, what I meant was that I was facing the same issue shown in your error logs. I rewrote my YAML file from scratch and it seemed to work; maybe there was some typo that I wasn't able to spot.

Just to be clear here, my issue was the same as yours on startup.

@briandealwis

@AnimalInstinct I realized what the problem is here (and thank you @jugaldb for giving me a reason to look at this again).

In your skaffold.yaml you set build.local.push = false, causing Skaffold to build to your local daemon and to not push the image to a remote registry (in this case, animalinstinct/auth on Docker Hub). But microk8s does not have access to your local Docker daemon and so these built images are not found.
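One way to see this separation on the reporter's machine might be the following sketch (assuming `microk8s ctr`, the containerd CLI bundled with microk8s, is available; the exact image listing format may differ):

```shell
# The Skaffold-tagged image exists in the local Docker daemon...
docker images animalinstinct/auth

# ...but containerd inside microk8s has never seen it
microk8s ctr images ls | grep animalinstinct || echo "not present in microk8s"
```

If the second command prints "not present in microk8s", the cluster has no way to start a container from the locally built tag, which matches the "can't be pulled" error in the log above.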

You can enable microk8s's built-in registry and deploy to it with skaffold dev --default-repo localhost:32000. You can then remove build.local.push from your skaffold.yaml.
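Concretely, the registry-based workflow might look like this sketch (localhost:32000 is the default address of the microk8s registry addon; enabling it can take a moment to roll out):

```shell
# Enable the built-in registry addon (serves on localhost:32000 by default)
microk8s enable registry

# Build, push to that registry, and deploy; --default-repo rewrites
# animalinstinct/auth to localhost:32000/animalinstinct/auth everywhere
skaffold dev --default-repo localhost:32000
```

Alternatively, the default repo can be persisted with skaffold config set default-repo localhost:32000 so that plain skaffold dev works afterwards.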


You might ask: why was this working with plain kubectl? Your Kubernetes resources reference animalinstinct/auth, which has an implicit tag of :latest, and you likely have a :latest image pushed to Docker Hub.

Skaffold builds the configured images with a generated tag, and it rewrites the kubernetes resources to use these generated tags during deployment. You can see this in the debug output:

spec:
      containers:
      - image: animalinstinct/auth:3c696c4506d3a14ff2b064ffbadad1d7b64b1ce69f0d13171c812e3fb289b263
        name: auth

Because you configured Skaffold not to push the images, this tagged image was only available in your local Docker daemon and not on Docker Hub, so microk8s was unable to resolve it.
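For reference, the reporter's skaffold.yaml adjusted for the registry approach might look like the sketch below (assuming the config is also upgraded to v2beta6, which skaffold fix can do; --default-repo is still supplied on the command line or via skaffold config):

```yaml
apiVersion: skaffold/v2beta6
kind: Config
build:
  # note: no local.push: false here — the image must reach a registry
  # that microk8s can pull from
  artifacts:
    - image: animalinstinct/auth
      context: auth
      docker:
        dockerfile: Dockerfile
      sync:
        manual:
          - src: 'src/**/*.ts'
            dest: .
deploy:
  kubectl:
    manifests:
      - k8s/*
```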
