Packer: User variables are still not allowing non-string values

Created on 1 Nov 2018 · 4 comments · Source: hashicorp/packer

This appears to be the same problem as issue #2510, which was reported as fixed in 2017. Running packer validate returns:

variable data_disk_sizes: '' expected type 'string', got unconvertible type 'float64'

Running packer 1.3.2 on Windows 10.

Example code:

{
  "variables": {
    "os_disk_size": 128,
    ...
  },
  "builders": [
    {
      "type": "azure-arm",
      "os_disk_size_gb": "{{user `os_disk_size`}}",
      ...
    }
  ]
}

The same thing happens when trying to pass an array as a variable.

Labels: core, question

Most helpful comment

Not sure if this is related to this issue, or if it warrants a new issue. I am having the exact same problem with the amazon-ebsvolume builder:

{
    "description": "Create and format an EBS volume for Prometheus data",
    "min_packer_version": "1.1.2",
    "variables": {
        "volume_name": "prometheus-server-data",
        "aws_region": "ap-southeast-1",
        "subnet_id": "",
        "temporary_security_group_source_cidr": "0.0.0.0/0",
        "associate_public_ip_address": "true",
        "ssh_interface": "",
        "data_volume_size": "400"
    },
    "builders": [
        {
            "name": "prometheus-data",
            "instance_type": "t3.micro",
            "region": "{{user `aws_region`}}",
            "type": "amazon-ebsvolume",
            "subnet_id": "{{user `subnet_id`}}",
            "associate_public_ip_address": "{{user `associate_public_ip_address`}}",
            "ssh_interface": "{{user `ssh_interface`}}",
            "temporary_security_group_source_cidr": "{{user `temporary_security_group_source_cidr`}}",
            "source_ami_filter": {
                "filters": {
                    "virtualization-type": "hvm",
                    "architecture": "x86_64",
                    "name": "*ubuntu-xenial-16.04-amd64-server-*",
                    "block-device-mapping.volume-type": "gp2",
                    "root-device-type": "ebs"
                },
                "owners": [
                    "099720109477"
                ],
                "most_recent": true
            },
            "ssh_username": "ubuntu",
            "ebs_volumes": [
                {
                    "volume_type": "gp2",
                    "device_name": "/dev/sdf",
                    "delete_on_termination": false,
                    "volume_size": "{{user `data_volume_size`}}",
                    "tags": {
                        "Name": "{{user `volume_name`}}",
                        "Timestamp": "{{isotime \"2006-01-02 03:04:05\"}}"
                    }
                }
            ],
            "run_tags": {
                "Name": "{{user `volume_name` }}",
                "Timestamp": "{{isotime \"2006-01-02 03:04:05\"}}"
            }
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "sudo mkfs -t ext4 /dev/nvme1n1"
            ]
        }
    ]
}

I get the error:

1 error(s) decoding:

* cannot parse 'ebs_volumes[0].volume_size' as int: strconv.ParseInt: parsing "{{user `data_volume_size`}}": invalid syntax

All 4 comments

Just put double quotes around 128.
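In other words, every user variable value must be a JSON string. For the template at the top of the thread, quoting the number is enough; the builder converts the interpolated string back to an integer itself. A sketch of the corrected variables block (fields outside the reported snippet are elided):

```json
{
  "variables": {
    "os_disk_size": "128"
  },
  "builders": [
    {
      "type": "azure-arm",
      "os_disk_size_gb": "{{user `os_disk_size`}}"
    }
  ]
}
```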

Not sure if this is related to this issue, or if it warrants a new issue. I am having the exact same issue with amazon-ebsvolume:

{
    "description": "Create and format an EBS volume for Prometheus data",
    "min_packer_version": "1.1.2",
    "variables": {
        "volume_name": "prometheus-server-data",
        "aws_region": "ap-southeast-1",
        "subnet_id": "",
        "temporary_security_group_source_cidr": "0.0.0.0/0",
        "associate_public_ip_address": "true",
        "ssh_interface": "",
        "data_volume_size": "400"
    },
    "builders": [
        {
            "name": "prometheus-data",
            "instance_type": "t3.micro",
            "region": "{{user `aws_region`}}",
            "type": "amazon-ebsvolume",
            "subnet_id": "{{user `subnet_id`}}",
            "associate_public_ip_address": "{{user `associate_public_ip_address`}}",
            "ssh_interface": "{{user `ssh_interface`}}",
            "temporary_security_group_source_cidr": "{{user `temporary_security_group_source_cidr`}}",
            "source_ami_filter": {
                "filters": {
                    "virtualization-type": "hvm",
                    "architecture": "x86_64",
                    "name": "*ubuntu-xenial-16.04-amd64-server-*",
                    "block-device-mapping.volume-type": "gp2",
                    "root-device-type": "ebs"
                },
                "owners": [
                    "099720109477"
                ],
                "most_recent": true
            },
            "ssh_username": "ubuntu",
            "ebs_volumes": [
                {
                    "volume_type": "gp2",
                    "device_name": "/dev/sdf",
                    "delete_on_termination": false,
                    "volume_size": "{{user `data_volume_size`}}",
                    "tags": {
                        "Name": "{{user `volume_name`}}",
                        "Timestamp": "{{isotime \"2006-01-02 03:04:05\"}}"
                    }
                }
            ],
            "run_tags": {
                "Name": "{{user `volume_name` }}",
                "Timestamp": "{{isotime \"2006-01-02 03:04:05\"}}"
            }
        }
    ],
    "provisioners": [
        {
            "type": "shell",
            "inline": [
                "sudo mkfs -t ext4 /dev/nvme1n1"
            ]
        }
    ]
}

I get the error


1 error(s) decoding:

* cannot parse 'ebs_volumes[0].volume_size' as int: strconv.ParseInt: parsing "{{user `data_volume_size`}}": invalid syntax

@lawliet89 that is not the same issue. Open a new issue and supply the requested information.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
