Skaffold: Order of deployers doesn't match order listed in `skaffold.yaml`

Created on 2 Apr 2020 · 22 Comments · Source: GoogleContainerTools/skaffold

Expected Behavior

I have a skaffold.yaml file that contains the lines:

apiVersion: skaffold/v2alpha3
kind: Config

metadata:
  name: discover

profiles:
  - name: local
    activation:
      - kubeContext: minikube
    build:
      local:
        useBuildkit: true
        push: false
      artifacts:
        - blah
    deploy:
      kubectl:
        manifests:
          - create-discover-app-namespace.json
      helm:
        releases:
          - name: dev
            chartPath: ./kubernetes/discover-app/
            etc: "etc"

Because helm3 does not automatically create namespaces (anymore/yet), I have a kubectl deployer that is intended to create the namespace for the subsequent helm deployer. (I also have in mind some other things I’d like to do with kubectl before helm runs - basically, I think there are other use cases besides creating namespaces.)

The kubectl deployer doesn't always run first, and when it doesn't, the helm deploy fails. Below is a heavily trimmed debug output showing this.

I expect the kubectl deployer to run to completion before the helm deployer runs.

I have not at all investigated the skaffold source to see what changes could be made, sorry. If this issue is a low priority I may get to that and augment this issue with more information.

Actual Behavior

See these logs: the helm deployer runs before the kubectl deployer, so the helm install fails because the namespace it needs doesn't exist yet.

(minikube:discover-app)➜  discover git:(acaird/AT-233-env-vars) ✗ skaffold -v debug run
INFO[0000] Skaffold &{Version:v1.6.0-docs ConfigVersion:skaffold/v2beta1 GitVersion: GitCommit:b74e2f94f628b16a866abddc2ba8f05ce0bf956c GitTreeState:clean BuildDate:2020-03-25T00:09:12Z GoVersion:go1.14 Compiler:gc Platform:linux/amd64}
DEBU[0000] config version (skaffold/v2alpha3) out of date: upgrading to latest (skaffold/v2beta1)
INFO[0000] applying profile: local
DEBU[0000] overlaying profile on config for field Build
 [...]
INFO[0000] Using kubectl context: minikube
DEBU[0000] Using builder: local
DEBU[0000] Running command: [minikube docker-env --shell none]
DEBU[0000] Command output: [DOCKER_TLS_VERIFY=1
DOCKER_HOST=tcp://127.0.0.1:32769
DOCKER_CERT_PATH=/home/acaird/.minikube/certs
MINIKUBE_ACTIVE_DOCKERD=minikube
]
DEBU[0000] setting Docker user agent to skaffold-v1.6.0-docs
Generating tags...
 - discover-db-migrator -> DEBU[0000] Running command: [git describe --tags --always]
DEBU[0000] Running command: [git describe --tags --always]
 [...]
DEBU[0006] Running command: [helm version]
INFO[0006] Deploying with helm v3.1.2 ...
DEBU[0006] Executing template &{envTemplate 0xc0003fc900 0xc00003b7c0  } with environment map[COLORTERM:truecolor...
DEBU[0006] Running command: [helm --kube-context minikube get all --namespace discover-app dev]
Helm release dev not installed. Installing...
DEBU[0006] EnvVarMap: map[DIGEST:discover-db-migrator:95cbaac2cf5f353ff87ca131d242f305f0a4b449b330b9faec0f63173fa7ed73...
DEBU[0006] Executing template &{envTemplate 0xc00011f600 0xc000a560c0  } with environment map[COLORTERM:truecolor...
DEBU[0006] Executing template &{envTemplate 0xc00062e000 0xc000a561c0  } with environment map[COLORTERM:truecolor...
 [...]
DEBU[0006] Executing template &{envTemplate 0xc00062f200 0xc000a574c0  } with environment map[COLORTERM:truecolor...
DEBU[0006] Executing template &{envTemplate 0xc00062f300 0xc000a57640  } with environment map[COLORTERM:truecolor...
DEBU[0006] Running command: [helm --kube-context minikube install dev ./kubernetes/discover-app/ --namespace discover-app --set-string api.image.fullName=discover-web-server:797b046c95f1d9ad8a088fe841c62f5f385d8e4fed27a6cdb914317c7d414419 [...]
coalesce.go:196: warning: cannot overwrite table with non table for nodeSelector (map[node_pool:association-node-pool])
coalesce.go:196: warning: cannot overwrite table with non table for nodeSelector (map[node_pool:association-node-pool])
Error: create: failed to create: namespaces "discover-app" not found
DEBU[0007] Running command: [helm --kube-context minikube get all --namespace discover-app dev]
WARN[0007] exit status 1
DEBU[0007] getting client config for kubeContext: ``
DEBU[0007] getting client config for kubeContext: ``
DEBU[0007] Running command: [kubectl version --client -ojson]
DEBU[0007] Command output: [{
  "clientVersion": {
    "major": "1",
    "minor": "17",
    "gitVersion": "v1.17.4",
    "gitCommit": "8d8aa39598534325ad77120c120a22b3a990b5ea",
    "gitTreeState": "clean",
    "buildDate": "2020-03-12T21:03:42Z",
    "goVersion": "go1.13.8",
    "compiler": "gc",
    "platform": "linux/amd64"
  }
}
]
DEBU[0007] Running command: [kubectl --context minikube create --dry-run -oyaml -f /home/acaird/git/discover/create-discover-app-namespace.json]
DEBU[0007] Command output: [apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: discover-app
  name: discover-app
]
WARN[0007] image [discover-db-migrator] is not used by the deployment
 [...]
DEBU[0007] manifests with tagged images: apiVersion: v1
kind: Namespace
metadata:
  labels:
    name: discover-app
  name: discover-app
DEBU[0007] manifests with labels apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/managed-by: skaffold-v1.6.0-docs
    name: discover-app
    skaffold.dev/builder: local
    skaffold.dev/cleanup: "true"
    skaffold.dev/deployer: kubectl
    skaffold.dev/docker-api-version: "1.40"
    skaffold.dev/run-id: 2bfe88b8-fe10-437f-8182-b82b55589ab2
    skaffold.dev/tag-policy: git-commit
    skaffold.dev/tail: "true"
  name: discover-app
DEBU[0007] 1 manifests to deploy. 1 are updated or new
DEBU[0007] Running command: [kubectl --context minikube apply -f -]
 - namespace/discover-app created
INFO[0007] Deploy complete in 688.331603ms
Waiting for deployments to stabilize
DEBU[0007] getting client config for kubeContext: ``
Deployments stabilized in 11.318892ms
You can also run [skaffold run --tail] to get the logs

Information

  • Skaffold Version:

    ~ skaffold version
    v1.6.0-docs
    

    and also:

    INFO[0000] Skaffold &{Version:v1.6.0-docs ConfigVersion:skaffold/v2beta1 GitVersion: GitCommit:b74e2f94f628b16a866abddc2ba8f05ce0bf956c GitTreeState:clean BuildDate:2020-03-25T00:09:12Z GoVersion:go1.14 Compiler:gc Platform:linux/amd64}
    
  • Operating System:

    ~ cat /etc/os-release
    NAME=Fedora
    VERSION="31 (Workstation Edition)"
    ID=fedora
    VERSION_ID=31
    VERSION_CODENAME=""
    PLATFORM_ID="platform:f31"
    PRETTY_NAME="Fedora 31 (Workstation Edition)"
    ANSI_COLOR="0;34"
    LOGO=fedora-logo-icon
    CPE_NAME="cpe:/o:fedoraproject:fedora:31"
    HOME_URL="https://fedoraproject.org/"
    DOCUMENTATION_URL="https://docs.fedoraproject.org/en-US/fedora/f31/system-administrators-guide/"
    SUPPORT_URL="https://fedoraproject.org/wiki/Communicating_and_getting_help"
    BUG_REPORT_URL="https://bugzilla.redhat.com/"
    REDHAT_BUGZILLA_PRODUCT="Fedora"
    REDHAT_BUGZILLA_PRODUCT_VERSION=31
    REDHAT_SUPPORT_PRODUCT="Fedora"
    REDHAT_SUPPORT_PRODUCT_VERSION=31
    PRIVACY_POLICY_URL="https://fedoraproject.org/wiki/Legal:PrivacyPolicy"
    VARIANT="Workstation Edition"
    VARIANT_ID=workstation
    
    ~ uname -a
    Linux primary 5.5.10-200.fc31.x86_64 #1 SMP Wed Mar 18 14:21:38 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
    

    See above for part of the skaffold.yaml file.

Steps to Reproduce the Behavior

I think that any example of a kubectl deployer (maybe a slow one, like create ns) followed by a helm deployer should show this. TBH I haven’t tried to create a minimal reproducible example, so let me know if that is needed.

area/deploy july-chill kind/feature-request priority/p2

All 22 comments

The deploy node is a map, and YAML does not preserve the order of map keys. The underlying data structure is a Go struct with named fields, so we have no information about the order in which the deployers were listed.

The actual deployment code runs the deployers in a fixed order: Helm, then Kubectl, then Kustomize.
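A minimal Go sketch of why the order is lost (hypothetical names, not Skaffold's actual code): once the YAML map is decoded into a struct with one named field per deployer, the original key order is gone, and the runner walks the fields in a hard-coded order:

```go
package main

import "fmt"

// Hypothetical mirror of a DeployType config struct: one named field per
// deployer. After decoding, nothing records which key came first in the YAML.
type HelmDeploy struct{ Releases []string }
type KubectlDeploy struct{ Manifests []string }
type KustomizeDeploy struct{ Paths []string }

type DeployType struct {
	Helm      *HelmDeploy
	Kubectl   *KubectlDeploy
	Kustomize *KustomizeDeploy
}

// deployerOrder instantiates deployers in an order fixed by the code,
// regardless of how skaffold.yaml was written.
func deployerOrder(d DeployType) []string {
	var order []string
	if d.Helm != nil {
		order = append(order, "helm")
	}
	if d.Kubectl != nil {
		order = append(order, "kubectl")
	}
	if d.Kustomize != nil {
		order = append(order, "kustomize")
	}
	return order
}

func main() {
	// kubectl was listed first in the config, yet helm still runs first.
	d := DeployType{
		Kubectl: &KubectlDeploy{Manifests: []string{"create-ns.json"}},
		Helm:    &HelmDeploy{Releases: []string{"dev"}},
	}
	fmt.Println(deployerOrder(d)) // [helm kubectl]
}
```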

Two possibilities:

  1. Add an explicit mechanism to specify an order. For example:

      deploy:
         helm: ...
         kubectl: ...
         order: ["kubectl", "helm"]
    
  2. Turn the deploy into a sequence of deployers. This would allow us to chain operations: first installing a CRD, then configuring a namespace, then installing some charts with helm, and so on.

      deploy:
      - kubectl: ...
      - helm: ...
      - kubectl: ...
    
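For concreteness, option 1's explicit `order` list could be sketched in Go roughly like this (names are illustrative, not Skaffold's real deployer interface): reorder the instantiated deployers to match the `order` list, keeping the default ordering for anything not listed:

```go
package main

import "fmt"

// Deployer is a stand-in for Skaffold's deployer interface.
type Deployer interface {
	Name() string
}

type named string

func (n named) Name() string { return string(n) }

// applyOrder reorders deployers to match an explicit `order` list from
// skaffold.yaml. Deployers not mentioned keep their default order at the end.
func applyOrder(deployers []Deployer, order []string) []Deployer {
	byName := make(map[string]Deployer, len(deployers))
	for _, d := range deployers {
		byName[d.Name()] = d
	}
	var out []Deployer
	for _, name := range order {
		if d, ok := byName[name]; ok {
			out = append(out, d)
			delete(byName, name)
		}
	}
	// Append whatever was not explicitly ordered, in default order.
	for _, d := range deployers {
		if _, left := byName[d.Name()]; left {
			out = append(out, d)
		}
	}
	return out
}

func main() {
	defaultOrder := []Deployer{named("helm"), named("kubectl"), named("kustomize")}
	for _, d := range applyOrder(defaultOrder, []string{"kubectl", "helm"}) {
		fmt.Println(d.Name())
	}
}
```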

A third alternative is to address the specifics of this situation: have Skaffold be able to create and deploy into a namespace. `dev` could create and tear down the namespace, whereas `run` (and `deploy`?) would create and then leave it.

Not that I'm informed enough to make recommendations, but on the face of it I prefer Option 2, since that would be the most flexible, but I don't have any sense of the cost of it.

Option 1 would be my second choice.

Option 3 isn't really worth it to me, especially since it looks like Helm 3.2 is going to implement namespace creation.

I also need this feature, because I need a CRD to be set up before I install resources of that kind. Adding namespace creation abilities to skaffold wouldn't help me. Either option 1 or 2 would be great.

Hey @briandealwis I'd like to chime in with support for the second option you proposed. I also need to set up a few CRDs before deploying resources and would like to be able to do so in an automated fashion using Skaffold. The current workaround is to manually apply the CRDs before using Skaffold.

I'm not familiar with the Skaffold code base, would you be able to offer any insight on the difficulty of implementing something like this?

@RetWolf the model itself is found in pkg/skaffold/schema/latest/config.go as the types DeployConfig and DeployType. These deploy specs are turned into something executable in pkg/skaffold/runner/new.go.

Something that will be key here is determining when a CRD or deployable is ready. I guess for simplicity we could just make each stage wait for all deployable elements to be ready before proceeding to the next stage.
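The "each stage waits for its deployables to be ready" idea could be sketched like this (illustrative Go with hypothetical deploy/wait hooks, not Skaffold's real runner):

```go
package main

import "fmt"

// stage is a hypothetical deployment stage: run its deployer, then block
// until everything it deployed reports ready.
type stage struct {
	name      string
	deploy    func() error
	waitReady func() error
}

// runStages executes stages strictly in order, failing fast: a stage's
// resources (e.g. a CRD being established) must be ready before the next
// stage begins.
func runStages(stages []stage) error {
	for _, s := range stages {
		if err := s.deploy(); err != nil {
			return fmt.Errorf("stage %q deploy: %w", s.name, err)
		}
		if err := s.waitReady(); err != nil {
			return fmt.Errorf("stage %q not ready: %w", s.name, err)
		}
	}
	return nil
}

func main() {
	ok := func() error { return nil }
	err := runStages([]stage{
		{name: "crds", deploy: ok, waitReady: ok},
		{name: "charts", deploy: ok, waitReady: ok},
	})
	fmt.Println(err) // <nil>
}
```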

I've hacked together a proof of concept for staged deployments, with the API looking something like this:

deploy:
  kubectl:
    manifests:
      - ./metrics.yaml
  stages:
    - name: setup
      kubectl:
        manifests:
          - ./crd-definitions.yaml
    - name: custom-resources
      kubectl:
        manifests:
          - ./crd-manifest.yaml
    - name: default
      helm:
        # further helm deployment code

This would be backwards compatible: the existing top-level helm, kubectl, and kustomize options would run before any stages, and pipelines without stages would work unchanged, making stages an opt-in feature (or vice versa, with pipelines using only stages and no top-level deployers).

As you mentioned, there is still the matter of determining when a CRD or deployable is ready. Before continuing to implement this functionality, I wanted to ask whether I should create a design proposal for these changes. While not breaking, it's an update to the deployment config that should probably be discussed.

@RetWolf, Thanks for jumping in

Not sure if we need detailed config for specifying dependencies.
Kubectl and kustomize natively support resource creation order.

The issue here is supporting deploy order amongst multiple Skaffold deployers.
The solution you proposed might be overkill. WDYT?

I think relying on deploy list order should suffice, unless you have a use case where this is not true.
Thanks
Tejal

@tejal29 Today I learned that kubectl and kustomize natively support resource creation order! Thanks for that link. I'd agree that the solution I came up with is a bit overkill; I'll take another look at my current config and at implementing deploy order amongst multiple deployers.

@tejal29 I think the resource creation order is only true for kustomize? There's still the ordering of Skaffold's deployers (e.g., kubectl vs helm).

I am experiencing what I think is a similar issue.
My skaffold file includes a CRD creation (for Traefik). To have those custom resources available when Skaffold creates resources based on them, I explicitly placed the CRD manifest first. However, skaffold (kubectl) exits with an error, complaining that the kinds of some resources are unknown (exactly the ones I expect the CRD yaml to create).

If I manually create the CRD with kubectl first, skaffold runs and completes without any error.

Am I falling into this issue?
Will the workaround proposed by @RetWolf work for me?

I agree with @briandealwis that the resource ordering described in the kubectl references apparently refers to kustomize.

The latest release of kustomize, v3.6.1, doesn't appear to solve the CRD issue:

    # kustomization.yaml
    resources:
      - https://storage.googleapis.com/tekton-releases/pipeline/previous/v0.13.2/release.yaml
      - https://raw.githubusercontent.com/tektoncd/catalog/bf7e538778ce6c1ea8ebc72bd0d050362e63a716/git/git-clone.yaml

    kustomize build | kubectl apply --filename -
    ...
    error: unable to recognize "STDIN": no matches for kind "Task" in version "tekton.dev/v1beta1"

#2645 was the impetus for respecting the order of the manifests as listed in the skaffold.yaml.

If we go forward with this, we could also add a cleanup flag to control whether to delete the referenced elements. This capability would be useful for creating persistent volume claims (#4366).

Do we really want to change the yaml syntax now?

  1. Add an explicit mechanism to specify an order. For example:

      deploy:
        helm: ...
        kubectl: ...
        order: ["kubectl", "helm"]

     I like approach 1 since, even though it's explicit, it's backward compatible without much effort.

I think there have been several themes where having multiple instances of the same deployer kind would be necessary (e.g., installing a CRD, or configuring a PV). Is it possible to have deploy take either an object/map or an array?

For anyone who needs this before it's patched in the next release, I came up with a simple workaround for local development: I swapped the order of Kubectl and Helm in the deployment code and built my own binary.

The configuration file now deploys the required CRDs first with kubectl, and then proceeds with the helm release:

    deploy:
      statusCheckDeadlineSeconds: 1300
      kubectl:
        manifests:
        - infrastructure/k1s-traefik/manifests/001-rbac.yaml
      helm:
        releases:
        - name: k1s-traefik
          namespace: k1s
          chartPath: infrastructure/k1s-traefik
          valuesFiles:
          - infrastructure/helm-values.yaml

Another workaround would be running two skaffold instances with two separate profiles, but this was a better fit for my needs. I can try coming up with an implementation of the first approach:

  1. Add an explicit mechanism to specify an order. For example:

      deploy:
        helm: ...
        kubectl: ...
        order: ["kubectl", "helm"]

I am just not sure if it would be the best approach since I am very new to Skaffold and I can't give my own opinions right away.

+1 I need this too.

What about @briandealwis 's third option?

A third alternative is to address the specifics of this situation: have Skaffold be able to create and deploy into a namespace. dev could create and tear down the namespace, whereas run (and deploy?) would create and then leave it.

For my use case, I have a fresh minikube installation and want the application to install itself into a specific namespace, "foo". Helm 3 has a `--create-namespace` flag that will automatically create the namespace to deploy into.

Is there any way we could support this field in the skaffold.yaml? I would love for it to automatically create the deployment namespace if it doesn't exist, rather than having devs manually run `kubectl create ns foo` before running skaffold.

@cpaika https://github.com/GoogleContainerTools/skaffold/pull/4765. this was just released with v1.15.0

@nkubala Awesome! I'll give it a try
