Skaffold: with skaffold dev, the forwarded ports change on code changes

Created on 18 Mar 2019  ·  13 Comments  ·  Source: GoogleContainerTools/skaffold

Hi,

While adopting skaffold dev for Java development, I am experiencing an issue similar to #1594

Each time I make a change in my java code, all ports that are forwarded are incremented by 1.

Information

  • Skaffold version: v0.25.0
  • Operating system: Mac OS X 10.14.3 x86_64
  • Contents of skaffold.yaml:
apiVersion: skaffold/v1beta7
kind: Config
build:
  artifacts:
    - image: ijp/color-service
      jibGradle: {}

Steps to reproduce the behavior

  • When I run skaffold dev, I have: 8081 -> 8081
  • On the 1st change, I have: 8081 -> 8082
  • On the 2nd change, I have: 8081 -> 8083
  • ...

Br,
JP

area/portforward kind/bug

Most helpful comment

I was having the same issue. It was because the pod name was changing, because I was using a deployment. I switched to using a pod directly (so that the pod name was static) and now the port forwarding remains static as well.

However, it would be super useful if I could use a Deployment, as it would make the local environment more closely match the production environment.
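For reference, here is a minimal sketch of that workaround — a bare Pod with a static name instead of a Deployment. The names, labels, and port below are placeholders based on the reporter's setup, not a tested configuration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: color-app        # static pod name, so the port-forward key stays stable
  labels:
    app: color-app
spec:
  containers:
  - name: color-app
    image: ijp/color-service
    ports:
    - containerPort: 8081
      name: http
```

The trade-off is exactly as noted above: a bare Pod is not recreated by a controller, so the local environment diverges further from a production Deployment.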

All 13 comments

Hey @jiraguha, could you provide more information about your project (the contents of your k8s YAMLs would be helpful)? More specifically, are pods recreated upon every code change?

We recently updated the port forward key to include pod name, so if a pod is being recreated upon every code change that would explain why this is happening.
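The effect of that change can be sketched with a toy model. This is an illustration of the keying behavior only, not Skaffold's actual implementation — the `allocate` helper and key tuples below are made up for the example:

```python
def allocate(forwards, key, requested_port):
    """Return the local port for `key`, reusing an existing forward if the
    key is already known, otherwise taking the next free port at or above
    the requested port."""
    if key in forwards:
        return forwards[key]
    taken = set(forwards.values())
    port = requested_port
    while port in taken:
        port += 1
    forwards[key] = port
    return port

forwards = {}

# Key includes the pod name: every rebuild creates a pod with a new hashed
# name, the old forward is never matched, and the local port keeps climbing.
allocate(forwards, ("color-app-7f58776499-l48wf", 8081), 8081)  # 8081
allocate(forwards, ("color-app-658fb54ff-v7bvd", 8081), 8081)   # 8082

forwards.clear()

# Key excludes the pod name: the key is stable across rebuilds, so the
# same local port is reused.
allocate(forwards, ("color-app", 8081), 8081)  # 8081
allocate(forwards, ("color-app", 8081), 8081)  # 8081
```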

I just started using skaffold yesterday and I'm running into the same issue.
Here is my deployment and service as spit out by kustomize, nothing fancy:

apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: ...
    app.kubernetes.io/name: ...
  name: ...
spec:
  ports:
  - name: http
    port: 3000
    protocol: TCP
    targetPort: http
  selector:
    app.kubernetes.io/component: ...
    app.kubernetes.io/name: ...
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: ...
    app.kubernetes.io/name: ...
  name: ...
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: ...
      app.kubernetes.io/name: ...
  template:
    metadata:
      labels:
        app.kubernetes.io/component: ...
        app.kubernetes.io/name: ...
    spec:
      containers:
      - envFrom:
        - secretRef:
            name: secret-248g9bh45m
            optional: false
        image: ...
        name: ...
        ports:
        - containerPort: 3000
          name: http

For completeness skaffold.yaml:

apiVersion: skaffold/v1beta7
kind: Config
build:
  artifacts:
  - image: ...
deploy:
  kustomize:
    path: kubernetes/

Hi @priyawadhwa,

You can find my whole project and the k8s manifests here. As you will see, it's basically something I generated with kompose.

Yes the pods are recreated:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
color-app-7f58776499-l48wf       1/1     Running   0          15s
color-mongodb-7db994bff4-57wdz   1/1     Running   0          15s

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
color-app-658fb54ff-v7bvd        1/1     Running   0          6s
color-mongodb-7db994bff4-57wdz   1/1     Running   0          1m

Yes, maybe the pod name should not be in the key, since with Java everything is rebuilt on each change.

Br,
JP

I was having the same issue. It was because the pod name was changing, because I was using a deployment. I switched to using a pod directly (so that the pod name was static) and now the port forwarding remains static as well.

However, it would be super useful if I could use a Deployment, as it would make the local environment more closely match the production environment.

I was having the same issue.

Seems related to #1594

I can confirm this behavior exists in v0.26.0. I reverted to v0.23.0 and the problem is indeed fixed on that version. Adding some tests around this would be great so this regression doesn't occur again.

Port forwarding during skaffold dev is a great value-add for developers, but it can't really gain adoption among developer teams if this feature is broken. Too bad I'm having to stay on an older version for it to work at the moment.

Also running into this issue and wondering if it's being considered a bug that will be fixed.

I just removed the pod name from the key in #2047 -- could someone please confirm whether this fixes their issue? You can find instructions for installing the latest version of skaffold at HEAD here, under "Latest bleeding edge binary".

I've just run into the same issue with v0.28 on macOS. I checked the latest build and this port changing issue seems to be fixed. 👍

One thing I've noticed with the latest build, though unrelated to this issue, is that file sync now results in an error for me:

Syncing 1 files for dl-org-api.dev:3a15582509b7144222b3685129694697a1ca8b7d37b31397801d86f13013e51e
INFO[0396] Copying files: map[test/request/request-test-helper.ts:/usr/src/dl-org/test/request/request-test-helper.ts] to dl-org-api.dev:3a15582509b7144222b3685129694697a1ca8b7d37b31397801d86f13013e51e
WARN[0397] Skipping deploy due to sync error: copying files: Running [kubectl exec dl-org-api-7b87dcbcf6-rqnlq --namespace default -c dl-org-api -i -- tar xmf - -C / --no-same-owner]: stdout , stderr: error: unable to upgrade connection: container not found ("dl-org-api"), err: exit status 1: exit status 1

Looks like it's still referencing the original pod dl-org-api-7b87dcbcf6-rqnlq that is now in "Terminating" status after the update instead of referencing the latest running pod.

Hmm... I'm no longer seeing the sync error reported above. Seems to be working OK after restart.

Thanks @demisx

Since this should be fixed in the next release, I'm going to go ahead and close this issue. If anyone continues to see any issues feel free to comment here and we can open it up again.
