Terraform-provider-aws: resource/aws_codepipeline: Crash when missing artifact_store or using multiple artifact stores

Created on 15 Feb 2019  ·  14 Comments  ·  Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @MarkAyliffe as hashicorp/terraform#20355. It was migrated here as a result of the provider split. The original body of the issue is below._


As title

Crash Output

crash.log

Expected Behavior

It should work without crashing, as it has for the last couple of months!

Actual Behavior

Terraform crashes if I try terraform plan or terraform apply.

bug crash service/codepipeline

All 14 comments

The error causing the crash:

2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: panic: runtime error: invalid memory address or nil pointer dereference
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: [signal 0xc0000005 code=0x0 addr=0x10 pc=0x26638c2]
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: 
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: goroutine 862 [running]:
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: github.com/terraform-providers/terraform-provider-aws/aws.flattenAwsCodePipelineArtifactStore(...)
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe:  /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_codepipeline.go:230
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: github.com/terraform-providers/terraform-provider-aws/aws.resourceAwsCodePipelineRead(0xc00111a000, 0x3529180, 0xc00024f400, 0xc00111a000, 0x0)
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe:  /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_codepipeline.go:439 +0x362
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe: github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema.(*Resource).Refresh(0xc000e033b0, 0xc000efe050, 0x3529180, 0xc00024f400, 0xc0005ca638, 0x4d4301, 0x2ec8740)
2019-02-15T09:33:25.537Z [DEBUG] plugin.terraform-provider-aws_v1.59.0_x4.exe:  /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-aws/vendor/github.com/hashicorp/terraform/helper/schema/resource.go:352 +0x167

Hi @MarkAyliffe 👋 Sorry you ran into this unexpected behavior here.

Could you please provide a little more detail about your CodePipeline configuration or any recent changes to it (potentially outside Terraform)? In particular, it would be helpful to know whether your CodePipeline has cross-region actions enabled, which use multiple artifact stores and are not currently supported by the Terraform resource. We can prevent the panic, but we will need to either implement the new CodePipeline feature or return an error in this situation.
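
For context, aws-sdk-go's codepipeline.PipelineDeclaration carries both shapes: a singular ArtifactStore pointer and a region-keyed ArtifactStores map. Below is a minimal sketch of telling the two apart after GetPipeline, assuming a hypothetical pipeline name; the handling logic is illustrative, not the provider's actual code:

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/codepipeline"
    )

    func main() {
        svc := codepipeline.New(session.Must(session.NewSession()))
        out, err := svc.GetPipeline(&codepipeline.GetPipelineInput{
            Name: aws.String("my-pipeline"), // hypothetical pipeline name
        })
        if err != nil {
            panic(err)
        }
        switch {
        case out.Pipeline.ArtifactStore != nil:
            // Single-region pipeline: one store, no region key.
            fmt.Println("single store:", aws.StringValue(out.Pipeline.ArtifactStore.Location))
        case len(out.Pipeline.ArtifactStores) > 0:
            // Cross-region pipeline: stores keyed by region. Unconditionally
            // dereferencing out.Pipeline.ArtifactStore here is the nil
            // pointer seen in the trace above.
            for region, store := range out.Pipeline.ArtifactStores {
                fmt.Println(region, "->", aws.StringValue(store.Location))
            }
        }
    }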


Apologies for my slow response @bflad

The problem seems to be intermittent. Some of my scripts do operate cross-region and cross-account. The latest case of the crash was in one of my CI/CD scripts. I was working on adding a Lambda Invoke action in a CodePipeline Deploy stage. The action had been created OK by Terraform, but when I made an adjustment to the script (sorry, I can't remember the detail), the crash occurred as before with both plan and apply. Manually deleting the Deploy stage in the AWS console cleared the problem, and Terraform successfully re-created the missing stage.

This script does refer to an S3 bucket in a different region, but that was not part of the change being made when the crash happened. There is no cross-account activity in this script at present. I was doing the cross-account deploy in the Lambda function, which was still manually created at that point.

Something else odd: I've seen this show up differently in different versions of the SDK. For example (Python here):
aws-cli/1.16.50 Python/3.7.1 Darwin/18.2.0 botocore/1.12.40
When looking at pipelines with that version, I don't see artifact stores. I upgraded to:
aws-cli/1.16.109 Python/2.7.10 Darwin/18.2.0 botocore/1.12.99
and I DO see the artifact stores. So possibly a version bump on the SDK might help too?
@bflad So it turns out people didn't manually change this; it just got triggered. The behavior is really odd: there ARE multiple stages with input/output artifacts, but bumping my local AWS CLI version made the root artifact store show up, and it NOW shows up under a region, e.g.

        "artifactStores": {
            "us-east-1": {
                "type": "S3",
                "location": "******"
            }
        }

Whereas with the earlier SDK it didn't show up with a region... possibly something server-side going on?

This bit us pretty badly. Essentially, when a developer manually (using the AWS Console) changes the source used in the source stage of the pipeline, even though there is nothing cross-region in the pipeline, the pipeline is then changed to use

      "artifactStores": {
            "eu-west-2": {
                "type": "S3",
                "location": "codepipeline-eu-west-2-960317561111"
            }
        },

rather than the previous format as put in by Terraform, which is

      "artifactStore": {
            "type": "S3",
            "location": "codepipeline-eu-west-2-960317561111"
        },

We are unable to do a terraform plan or even a terraform destroy, due to the panic.

We have been using the ability to change the source branch in the console for ages, and our first thought was that the crash was due to the updated terraform-provider-aws_v1.59.0. However, we then went back to the branch that had previously run through successfully, using Terraform 0.11.8 and terraform-provider-aws_v1.57.0 (the versions in place at the time of that successful run), and still hit the same Terraform panic.

I suspect something changed at the AWS end such that if you modify a pipeline using the console, the configuration is written out to support cross-region, even if there are no cross-region actions.

This bug needs to be bumped up the priority list, as it breaks existing deployments even with no changes to Terraform versions.

So my pipeline as generated by Terraform has two input artifacts...

    stage {
        name = "Test"
        action {
            run_order = 1
            name = "application_api_other"
            category = "Build"
            owner = "AWS"
            provider = "CodeBuild"
            version = "1"
            input_artifacts = ["tests", "build"]
            output_artifacts = ["test_application_api_other"]
            configuration {
                ProjectName = "${element(aws_codebuild_project.tests.*.name, 0)}"
            }
        }
    }

This went okay, but when the pipeline ran, it complained that there was no "primary source". I updated the pipeline by hand, which removed the error in the AWS console when I run the pipeline, but now when I run terraform (plan or apply, as per the OP)...

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

Terraform crashed! This is always indicative of a bug within Terraform.
A crash log has been placed at "crash.log" relative to your current
working directory. It would be immensely helpful if you could please
report the crash with Terraform[1] so that we can fix this.

When reporting bugs, please include your terraform version. That
information is available on the first line of crash.log. You can also
get it by running 'terraform --version' on the command line.

[1]: https://github.com/hashicorp/terraform/issues

!!!!!!!!!!!!!!!!!!!!!!!!!!! TERRAFORM CRASH !!!!!!!!!!!!!!!!!!!!!!!!!!!!

The only thing I could do was delete the pipeline and re-run Terraform, but I still have the issue that my pipeline does not run.
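
For reference, the "no primary source" complaint above is most likely CodePipeline's rule that a CodeBuild action with more than one input artifact must name one of them via the PrimarySource configuration key. A hedged sketch of the same action in aws-sdk-go terms (the CodeBuild project name is hypothetical; the artifact names are taken from the config above):

    package main

    import (
        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/codepipeline"
    )

    // With multiple input artifacts, the CodeBuild action's configuration
    // must name one of them as PrimarySource; this is what the hand edit
    // in the console effectively added.
    var testAction = &codepipeline.ActionDeclaration{
        Name:     aws.String("application_api_other"),
        RunOrder: aws.Int64(1),
        ActionTypeId: &codepipeline.ActionTypeId{
            Category: aws.String("Build"),
            Owner:    aws.String("AWS"),
            Provider: aws.String("CodeBuild"),
            Version:  aws.String("1"),
        },
        InputArtifacts: []*codepipeline.InputArtifact{
            {Name: aws.String("tests")},
            {Name: aws.String("build")},
        },
        OutputArtifacts: []*codepipeline.OutputArtifact{
            {Name: aws.String("test_application_api_other")},
        },
        Configuration: map[string]*string{
            "ProjectName":   aws.String("my-codebuild-project"), // hypothetical
            "PrimarySource": aws.String("tests"),                // pick one input as primary
        },
    }

    func main() { _ = testAction }

In the Terraform configuration this would correspond to adding a PrimarySource entry alongside ProjectName in the action's configuration map, which should keep the console and Terraform views in sync.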

We debugged this on our side too, and whenever a change is made in the UI, the JSON returned by the GetPipeline call changes to this:

DEBUG: Response codepipeline/GetPipeline Details:
<removed some fields; the returned format is shown below>
{
  "metadata": {
    "created": xxxx,
    "pipelineArn": "xxxx",
    "updated": xxxx
  },
  "pipeline": {
    "artifactStores": {
      "eu-west-2": {
        "location": "our.s3bucket.name",
        "type": "S3"
      }
    },

Stack trace:

panic: runtime error: invalid memory address or nil pointer dereference
2019-02-26T14:47:50.010+0100 [DEBUG] plugin.terraform-provider-aws_v1.60.0_x4: [signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x31c663b]
2019-02-26T14:47:50.010+0100 [DEBUG] plugin.terraform-provider-aws_v1.60.0_x4: 
2019-02-26T14:47:50.010+0100 [DEBUG] plugin.terraform-provider-aws_v1.60.0_x4: goroutine 607 [running]:
2019-02-26T14:47:50.010+0100 [DEBUG] plugin.terraform-provider-aws_v1.60.0_x4: github.com/terraform-providers/terraform-provider-aws/aws.flattenAwsCodePipelineArtifactStore(...)
2019-02-26T14:47:50.010+0100 [DEBUG] plugin.terraform-provider-aws_v1.60.0_x4:  /opt/teamcity-agent/work/5d79fe75d4460a2f/src/github.com/terraform-providers/terraform-provider-aws/aws/resource_aws_codepipeline.go:230
<removed unneeded stuff>

We tracked down the issue to https://github.com/terraform-providers/terraform-provider-aws/blob/04340792ac1c2b355fb7143814a737de84f047e7/aws/resource_aws_codepipeline.go#L228

It assumes a singular artifactStore is present in the response, and in our case, after every edit in the UI, it is not.

Documentation at https://docs.aws.amazon.com/cli/latest/reference/codepipeline/update-pipeline.html seems ambiguous about which format is supported, but the UI right now always moves a pipeline from the single-store format to the multi-region one.
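
As a rough illustration only (the fix eventually merged, per the maintainer below, reworked the resource's schema), a nil-guarded flatten that accepts both shapes could look like this; the function name and attribute keys here are illustrative, not the provider's actual code:

    package main

    import (
        "fmt"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/service/codepipeline"
    )

    // flattenArtifactStores handles both the singular artifactStore and the
    // region-keyed artifactStores map, instead of unconditionally dereferencing
    // the singular field (the nil pointer at resource_aws_codepipeline.go:230).
    func flattenArtifactStores(p *codepipeline.PipelineDeclaration) ([]map[string]interface{}, error) {
        if p.ArtifactStore != nil {
            return []map[string]interface{}{{
                "type":     aws.StringValue(p.ArtifactStore.Type),
                "location": aws.StringValue(p.ArtifactStore.Location),
            }}, nil
        }
        if len(p.ArtifactStores) > 0 {
            stores := make([]map[string]interface{}, 0, len(p.ArtifactStores))
            for region, store := range p.ArtifactStores {
                stores = append(stores, map[string]interface{}{
                    "region":   region,
                    "type":     aws.StringValue(store.Type),
                    "location": aws.StringValue(store.Location),
                })
            }
            return stores, nil
        }
        return nil, fmt.Errorf("pipeline has neither artifactStore nor artifactStores set")
    }

    func main() {} // placeholder so the sketch compiles standalone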

Has this been fixed in 2.0?

Nope. PR is not merged yet.

Any idea on when the fix will get merged in? We have over 100 pipelines configured through terraform in one region alone.

The fix for this (multi-region handling in the aws_codepipeline resource) was merged as part of #12549, but GitHub was having issues at the time of merge, so I'm closing this manually.

This has been released in version 2.56.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
