Azure-pipelines-tasks: KubernetesManifest error on final increment of canary

Created on 11 Dec 2019 · 11 comments · Source: microsoft/azure-pipelines-tasks

Type: Bug

Enter Task Name: KubernetesManifest

Environment

  • Server - Azure Pipelines

Issue Description

When using the task with the canary strategy, the final step is to make the canary the new stable version. The process tries to patch the spec.selector for the deployment:

Originally created canary deployment YAML:

{
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "my-service-canary",
        "namespace": "myservice",
        "labels": {
            "app": "my-service",
            "chart": "my-service-0.0.1",
            "azure-pipelines/version": "canary"
        },
        "annotations": {
            "azure-pipelines/version": "canary"
        }
    },
    "spec": {
        "replicas": 1,
        "selector": {
            "matchLabels": {
                "app": "my-service",
                "azure-pipelines/version": "canary"
            }
        },
        <snip>

Attempted stable deployment YAML:

{
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "my-service",
        "namespace": "myservice",
        "labels": {
            "app": "my-service",
            "chart": "my-service-0.0.1",
            "azure-pipelines/version": "stable"
        },
        "annotations": {
            "azure-pipelines/version": "stable"
        }
    },
    "spec": {
        "replicas": 2,
        "selector": {
            "matchLabels": {
                "app": "my-service",
                "azure-pipelines/version": "stable"
            }
        },
        <snip>

This results in the error:

The Deployment "my-service" is invalid: spec.selector: Invalid value: v1.LabelSelector{MatchLabels:map[string]string{"app":"my-service", "azure-pipelines/version":"stable"}, MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable

According to the Kubernetes team (https://github.com/kubernetes/kubernetes/issues/50808), spec.selector is immutable after creation, so I don't believe this task should be adding or modifying that selector as part of the canary rollout process.
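Since spec.selector cannot change after creation, a promotion step can only flip the pod template and metadata labels while carrying the existing selector forward. A minimal Python sketch of that idea (the `promote_canary` helper is hypothetical, for illustration only, and is not the task's actual code):

```python
import copy

def promote_canary(canary_manifest, existing_stable_manifest=None):
    """Build a stable Deployment manifest from a canary one.

    If a stable Deployment already exists, its spec.selector is reused
    verbatim, because Kubernetes rejects any change to that field after
    creation ("field is immutable").
    """
    stable = copy.deepcopy(canary_manifest)

    # Strip the "-canary" suffix from the Deployment name.
    name = stable["metadata"]["name"]
    if name.endswith("-canary"):
        stable["metadata"]["name"] = name[: -len("-canary")]

    # Flip the version markers from canary to stable.
    for section in ("labels", "annotations"):
        stable["metadata"].setdefault(section, {})["azure-pipelines/version"] = "stable"

    if existing_stable_manifest is not None:
        # Never rewrite an existing selector: copy it unchanged.
        stable["spec"]["selector"] = copy.deepcopy(
            existing_stable_manifest["spec"]["selector"])
    else:
        # First deployment: drop the canary marker from the selector so
        # later promotions never need to mutate it. (A full implementation
        # would keep the pod template labels consistent with the selector.)
        stable["spec"]["selector"]["matchLabels"].pop(
            "azure-pipelines/version", None)
    return stable
```

The key design point is the asymmetry: labels and annotations are freely rewritten on every promotion, while the selector is either inherited from the live object or written once, without the version marker, and then left alone.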

Task logs

Can be uploaded upon request.

Release bug

Most helpful comment

Looking for an update on this fix/rollout to Azure DevOps. Can't use the KubernetesManifest promote task at all due to this issue, which means I can't use canary functionality.

All 11 comments

I think you are using SMI canary deployment. It works if you deploy using SMI canary from the start. If you have an already existing deployment (deployed some other way) and then try to use canary deployment with SMI, it will fail.

No, this is without an existing deployment. It fails with an actual readable error if a deployment already exists.

@yelob, if you apply the same deployment YAML file, Kubernetes doesn't throw any error. Can you delete the existing deployment and try once again? If you are still able to repro, can you share the steps you have tried?

I’ve tried it several times, first deleting the existing deployment. If the deployment exists, a different error occurs. I can upload logs and YAML files after I scrub out some specifics.


Here's a sample pipeline that fails:

stages:
- stage: mystage
  displayName: 'Deploy Canary'
  jobs:
  - deployment: deploy
    pool:
      vmImage: ubuntu-latest
    environment: foo.myservice
    strategy:
      canary:
        increments: [25,50]
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: 'Bake manifest'
            name: bake
            inputs:
              action: bake
              namespace: 'myservice'
              helmChart: '$(Build.ArtifactStagingDirectory)/deploy/my-service-0.0.1.tgz'
              releaseName: 'my-service'
          - task: KubernetesManifest@0
            displayName: 'Deploy manifest'
            inputs:
              action: $(strategy.action)
              strategy: $(strategy.name)
              percentage: $(strategy.increment)
              trafficSplitMethod: smi
              baselineAndCanaryReplicas: 1
              kubernetesServiceConnection: 'my-kubernetes'
              namespace: 'myservice'
              manifests: $(bake.manifestsBundle)

Thanks @yelob , I will try and get back to you.

@yelob Can you share the complete debug logs of all the iterations? If they contain sensitive data, you can share them at [email protected].

I emailed the logs with the issue number in the subject.

The issue has been identified and fixed. The fix will be rolled out to all the customers in about three weeks.

Looking for an update on this fix/rollout to Azure DevOps. Can't use the KubernetesManifest promote task at all due to this issue, which means I can't use canary functionality.

The deployment is in progress and the changes will be rolled out by early next week. If you share your account details with [email protected], we can provide a timeline for when the deployment will reach your ring.
