_This issue was originally opened by @joshuaspence as hashicorp/terraform#12947. It was migrated here as part of the provider split. The original body of the issue is below._
I've been working on adding `name_prefix` to a bunch of AWS resources lately so that multiple environments can reside in the same stack. One problem that I've come across is that some AWS resources have quite a small limit on the length of the `name` property. I raised this issue in hashicorp/terraform#12629 and was advised to create a new issue (CC @stack72). For example, the maximum length for an autoscaling group name is 32 characters, and the generated unique suffix takes 26 of them, so the maximum length of the `name_prefix` is a measly 6 characters.
One way we could work around this would be to truncate the `name` so that it doesn't exceed the maximum length. So if the maximum length is 32 characters and I provide a `name_prefix` that is 16 characters, I would only get 16 characters of randomness.
I'm happy to work on implementing a solution if we can agree on one.
@joshuaspence is this all it'd take?
```diff
diff --git a/aws/resource_aws_alb.go b/aws/resource_aws_alb.go
index d1652c68..44d77a2e 100644
--- a/aws/resource_aws_alb.go
+++ b/aws/resource_aws_alb.go
@@ -151,7 +151,7 @@ func resourceAwsAlbCreate(d *schema.ResourceData, meta interface{}) error {
 	} else {
 		name = resource.PrefixedUniqueId("tf-lb-")
 	}
-	d.Set("name", name)
+	d.Set("name", name[:32])
 
 	elbOpts := &elbv2.CreateLoadBalancerInput{
 		Name: aws.String(name),
diff --git a/aws/validators.go b/aws/validators.go
index 7ea9873e..90ce3a39 100644
--- a/aws/validators.go
+++ b/aws/validators.go
@@ -191,9 +191,9 @@ func validateElbNamePrefix(v interface{}, k string) (ws []string, errors []error
 			"only alphanumeric characters and hyphens allowed in %q: %q",
 			k, value))
 	}
-	if len(value) > 6 {
+	if len(value) > 32 {
 		errors = append(errors, fmt.Errorf(
-			"%q cannot be longer than 6 characters: %q", k, value))
+			"%q cannot be longer than 32 characters: %q", k, value))
 	}
 	if regexp.MustCompile(`^-`).MatchString(value) {
 		errors = append(errors, fmt.Errorf(
```
I can't figure out a way to test this. I've tried (a) `make build` in this repo, and also (b) copying the changes to `vendor/github.com/terraform-providers/terraform-provider-aws/aws` in the Terraform repo proper, to no avail.
When will this be considered for a fix?
Hi folks,
@shatil due to the way `PrefixedUniqueId` is implemented, you will get conflicts. Here is the output with and without simple truncation:
PrefixedUniqueId("this-is-my-alb-") # echoes "this-is-my-alb-20091110230000000000000001"
PrefixedUniqueId("this-is-my-alb-") # echoes "this-is-my-alb-20091110230000000000000002"
PrefixedUniqueId("this-is-my-alb-") # echoes "this-is-my-alb-20091110230000000000000003"
The only difference between each result is the monotonic counter, which increments the last digits.
Going with the truncation, you would get:
```go
fmt.Println(PrefixedUniqueId("this-is-my-alb-")[:32]) // "this-is-my-alb-20091110230000000"
fmt.Println(PrefixedUniqueId("this-is-my-alb-")[:32]) // "this-is-my-alb-20091110230000000"
fmt.Println(PrefixedUniqueId("this-is-my-alb-")[:32]) // "this-is-my-alb-20091110230000000"
```
Now, we could say that we want to keep only the last bytes, but how many: the last digit (1, 2 or 3 in our case)? What if we have 10 usages? You can call the function as many times as you want for the same prefix (it may not make much sense, but it is still possible).
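For context, here is a simplified sketch of how that suffix gets built (based on the `PrefixedUniqueId` helper in Terraform's `helper/resource` package; details may differ between versions): a UTC timestamp with four fractional-second digits plus an 8-digit zero-padded counter, 26 characters in total.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
	"time"
)

var (
	idMu      sync.Mutex
	idCounter uint32
)

// prefixedUniqueID is a simplified sketch of the helper: a UTC timestamp
// with 4 fractional digits plus a zero-padded monotonic counter,
// i.e. 18 + 8 = 26 characters appended to the prefix.
func prefixedUniqueID(prefix string) string {
	// "20060102150405.0000" with the dot removed -> 18 digits.
	ts := strings.Replace(time.Now().UTC().Format("20060102150405.0000"), ".", "", 1)
	idMu.Lock()
	defer idMu.Unlock()
	idCounter++
	return fmt.Sprintf("%s%s%08d", prefix, ts, idCounter)
}

func main() {
	fmt.Println(prefixedUniqueID("this-is-my-alb-")) // 15 + 26 = 41 characters
}
```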
As @apparentlymart wrote in the initial issue, we can still limit the `name_prefix` to a given value. However, resources do not all use the same maximum length. That being said, it is kind of hard to manage this case of 32 chars in my opinion, and we would need to choose an arbitrary number to truncate the string.

What I mean is that `envname-myproject-component-` is a common case when you have a single AWS account managing at least 2 environments. In this case, I could have `prod-terraform-docker-`, and that's already 22 chars :/
At the moment, I'm not sure we can bring any solution to this, because of the really tight limitation AWS imposes here 🤷‍♂️
Can we have some solution to this? Just take whatever the user provided as `name_prefix`, add randomness, then cut to 32 characters, and document that behaviour. Since it's a limitation on the AWS side, nothing can be done about the limit itself. I would rather have Terraform working at least somehow than have it print a useless error message and fail to do what I expect.
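For what it's worth, here's a rough Go sketch of that idea with one tweak: it trims the user-supplied prefix rather than right-truncating the whole name, so the unique suffix @Ninir showed above stays intact. The helper name and behaviour are hypothetical, not what the provider does today.

```go
package main

import (
	"fmt"

	"github.com/hashicorp/terraform/helper/resource"
)

// boundedUniqueName is a hypothetical helper: it generates a prefixed unique
// ID, and if the result is too long it trims the user-supplied prefix rather
// than the generated suffix, so uniqueness is preserved.
func boundedUniqueName(prefix string, maxLen int) string {
	full := resource.PrefixedUniqueId(prefix)
	if len(full) <= maxLen {
		return full
	}
	suffix := full[len(prefix):] // the 26-character timestamp+counter part
	keep := maxLen - len(suffix)
	if keep < 0 {
		keep = 0
	}
	return prefix[:keep] + suffix
}

func main() {
	// A 22-character prefix gets cut down to 6 characters so the
	// whole name still fits in 32.
	fmt.Println(boundedUniqueName("prod-terraform-docker-", 32))
}
```

The obvious downside, as noted above, is that you're back to only 6 readable characters for resources with a 32-character limit.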
Looking forward to this feature!
My current workaround is constraining my ALB prefixes to 6 characters. After all, 6 characters ought to be enough for anybody 😄
Is there a reason that the unique ID generated with a prefixed name must end with so many zeros? Is there a reason this unique ID is also in base 10?

Here's one that I'm using right now: `20171026194535768300000001`. It looks like the timestamp `2017-10-26 19:45:35.7683` followed by the counter `00000001`. That `00000001` at the end looks like it's padding the time well beyond milliseconds. Are nanoseconds really needed? Could this get trimmed to, say, 3 digits to handle the monotonic counter? That would remove 5 characters and allow security groups to have a prefix that's longer than 6 characters.

This number appears to exist only for Terraform's use. One could change the base-10 number into base 64 and save several bytes: `BCvYYVkxTtuqRMB` is that same 26-digit number as above but saves 11 characters. If you were also to trim five of the `0` characters, you save even more: `bVjsEVNIYdwx` is 12 bytes instead of the 21 found in `201710261945357683001`. (converter)
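To put numbers on the base-conversion idea, here's a quick Go illustration using `math/big` (the exact characters differ from the linked converter's alphabet, but the length savings are the point):

```go
package main

import (
	"fmt"
	"math/big"
)

func main() {
	// The 26-digit decimal suffix from the example above.
	n, ok := new(big.Int).SetString("20171026194535768300000001", 10)
	if !ok {
		panic("failed to parse suffix")
	}

	// math/big can render the same value in any base up to 62.
	fmt.Println(len(n.Text(10)), n.Text(10)) // 26 characters in base 10
	fmt.Println(len(n.Text(36)), n.Text(36)) // 17 characters in base 36
	fmt.Println(len(n.Text(62)), n.Text(62)) // 15 characters in base 62
}
```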
Why use time at all? What about an internal counter stored in Terraform state that gets incremented every time Terraform runs? Additionally, as noted above, something of the form `[a-zA-Z0-9]^n` should cover even degenerate use cases without wrapping, using only a handful of characters.
Just got bitten by this too... There are good suggestions above about using a random number or a UUID and just truncating the resulting name to 32 chars.

I think we all agree that this is an AWS limitation and there is no good solution until AWS raises the size limit on that field. In the meantime, I would think that an imperfect solution, mentioned as a caveat in the docs, is better than leaving this open...
Thank you,
@melo I don't think this is an AWS issue. 32 characters to name something seems reasonable; sure, it could be longer, but that's not really the problem. The point of `name_prefix` is to give some human-readable information in the name; the rest is just for uniqueness within the given namespace. Uniqueness doesn't remotely require 26 characters, so it should be shortened.
@jgerry I do agree that whatever the limit is, Terraform must enforce it and constrain the random part so that the limit is respected. But I would prefer to have more than 32 chars from AWS as well.
Here's a "works for me"-level data provider to generate a name_prefix. Uniqueness is based on an encoded Unix timestamp. With the alphabet provided it requires six characters (plus a 7th for a dash). You can pass a resource name longer than 32 - 6 - 1 = 25 characters, but the script will truncate your name to fit.
data "external" "namewithtimestamp" {
program = ["python", "module/path/here.py"]
query = {
name = "mynamehere"
}
}
resource "aws_thing" "thing" {
# name_prefix = "mynamehere"
name = "${data.external.namewithtimestamp.result.name_with_timestamp}"
}
```python
#!/usr/bin/python
# https://stackoverflow.com/questions/561486/how-to-convert-an-integer-to-the-shortest-url-safe-string-in-python
import json, string, sys, time

ALPHABET = string.ascii_uppercase + string.ascii_lowercase + \
    string.digits
ALPHABET_REVERSE = dict((c, i) for (i, c) in enumerate(ALPHABET))
BASE = len(ALPHABET)
SIGN_CHARACTER = '$'


def num_encode(n):
    if n < 0:
        return SIGN_CHARACTER + num_encode(-n)
    s = []
    while True:
        n, r = divmod(n, BASE)
        s.append(ALPHABET[r])
        if n == 0: break
    return ''.join(reversed(s))


def num_decode(s):
    if s[0] == SIGN_CHARACTER:
        return -num_decode(s[1:])
    n = 0
    for c in s:
        n = n * BASE + ALPHABET_REVERSE[c]
    return n


class TfExternal(object):
    """Wrap Terraform External provider."""

    @staticmethod
    def query_args(obj):
        """Load json object from stdin."""
        return {} if obj.isatty() else json.load(obj)

    @staticmethod
    def out_json(result):
        """Print result to stdout."""
        print(json.dumps(result))


def main(**kwargs):
    encodedtimestamp = num_encode(int(time.time()))
    datestring = {
        "name_with_timestamp": "{name:.{namelen}}-{timestamp}".format(
            timestamp=encodedtimestamp,
            name=kwargs['name'],
            namelen=32 - 1 - len(encodedtimestamp),
        )
    }
    return datestring


if __name__ == "__main__":
    tfe = TfExternal()
    args = tfe.query_args(sys.stdin)
    sys.exit(tfe.out_json(main(**args)))
```
One nasty side effect of this is that the name changes on each run, so the resource is updated/recreated every time, regardless of whether it has any other changes.
@Ninir, sounds like we're pushing the blame onto AWS for a not-so-great design in our suffix generator. Somehow the generator produces suffixes such as `20171227051737952600000003`, where both the left-most data and the right-most data are relevant while everything in between is utterly irrelevant.
This could be fixed by doing either `$littleendian_id$timestamp` or `$bigendian_timestamp$bigendian_id` (you could do `$timestamp$bigendian_id`, but it's a bit silly). Now your 10th ID would be either `10201712270517379526` or `62597371507221710201` (20 chars).
Also, as someone mentioned, we're using the highly inefficient base 10 to encode the timestamp, and I believe we're using base 16 to encode the ID. We could easily use base 36 (lower-case alphanumeric) to get smaller yet perfectly valid unique IDs (I wouldn't go past 36, since we don't know whether there are other character restrictions).
Now your 7th object in a single run would have the ID `7201712270517379526` (19 chars) or `1ipqxxrstg74w` (13 chars) in base 36. Your 7-millionth would be `7000000201712270517379526` (25) or `vnw1rumh99cg0048` (16).

Now say that things got a bit crazy and we've reached our max object count in a run (100M); even then, just by changing bases we go from `99999999201712270517379526` (26) to `ckc594rkl9ko8w48o` (17).

Now the prefix is much more flexible. With a more sane number of objects (let's say we have < 10k) we top out at `zg4ae8z5a`: our previous 26-char suffix is now 9 characters.
Also, this doesn't even take into account the fact that we format the initial date in a poor (human-readable) way, because suddenly we go from `10201712270517379526` (20) to `1015143518579526` (16) for our 10th object. Our last object is now `9999999915143518579526` (22) instead of `99999999201712270517379526` (26), or in base 36 `1mmfb7d4zazowwc` (15).
Generating the suffix entirely in base 10 when it could make use of base 36 is an incredible waste of space. Heck, with that system, if we wanted the largest ID that fits the current limit of 26 characters, it would be `zzzzzzzzzzzzzzzzzzzzzzzzzz` (26) -> 29098125988731518248886462202268842800600; assuming we haven't moved to timestamps over 10B yet, that would be the 290981259887315182488864622th object (2.91e26).

This system has a flaw, though: when the timestamp gains a digit (1514365900 being the current one, the next step up being 10000000000), there is the possibility of a conflict between the first object of 10000000000 and the 11th of epoch. (For reference, that's 11/20/2286 @ 5:46pm vs the epoch.) Other similar collisions could have happened way more frequently before (the previous one was in 2001, the one before in 1973, and everything else happened during 1970), but they are now practically impossible to hit (the next one will be in the year 5138). (Note: it isn't that I don't like Terraform, but I really hope that 300 years from now technology will be slightly different and these Terraform IDs will be less relevant.)
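For illustration, a minimal Go sketch of the epoch-plus-counter idea discussed above, re-encoded in base 36 (the helper name is made up, and the collision caveats above still apply):

```go
package main

import (
	"fmt"
	"strconv"
	"time"
)

// compactSuffix concatenates a per-run counter with the current Unix epoch
// seconds and re-encodes the result in base 36, as sketched above.
// The counter must stay small enough for the combined number to fit in int64.
func compactSuffix(counter int) (string, error) {
	n, err := strconv.ParseInt(fmt.Sprintf("%d%d", counter, time.Now().Unix()), 10, 64)
	if err != nil {
		return "", err
	}
	return strconv.FormatInt(n, 36), nil
}

func main() {
	suffix, err := compactSuffix(7)
	if err != nil {
		panic(err)
	}
	// Roughly 7-8 characters instead of 26.
	fmt.Println("prod-myapp-" + suffix)
}
```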
Regardless of the particular implementation, it would be great to have at least some workaround for this, since it breaks a lot of terraform functionality.
It's a year later and this is still a big issue.
Updates? Anyone? Bueller?
Any progress/update on this?
> Looking forward to this feature!
> My current workaround is constraining my ALB prefixes to 6 characters. After all, 6 characters ought to be enough for anybody 😄
Surely that must be irony. It's not uncommon to see prefixes such as `prod-myapp`. The 6-char limit would yield `prod-m`.
> Surely that must be irony
I'm pretty sure it is. In my case I want an environment prefix for alb names, but for a couple of my environments that prefix is 6 characters...
Dumb suggestion, but use epoch instead of spelling out the timestamp?
This was touched upon in the Google provider, and they actually removed `name_prefix` entirely in favour of using the random generator, but that's insane to me, as you end up with dozens of `keepers` per resource if you want the resource to be regenerated when a change happens. That should be invisible to the TF user and certainly not clutter up the code, imo.

I don't dislike the idea of handing the job of unique naming over to the user, but it would be nice to have TF decide whether the name needs updating or not, as it currently does with `name_prefix`.
Could we make `PrefixedUniqueId` call `random_id` in the backend to generate its cryptographically unique suffix, and perhaps have a config variable that defines the length? I'd consider that a satisfactory alternative until a solution that makes `random_id` better is offered.
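Something along these lines, as a rough sketch (the function name, the base-36 alphabet, and the length handling are all made up for illustration; this is not what `PrefixedUniqueId` does today):

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// randomSuffixedName appends an N-character random base-36 suffix to prefix
// and trims the prefix so the whole name fits within maxLen.
func randomSuffixedName(prefix string, suffixLen, maxLen int) (string, error) {
	const alphabet = "abcdefghijklmnopqrstuvwxyz0123456789"
	suffix := make([]byte, suffixLen)
	for i := range suffix {
		idx, err := rand.Int(rand.Reader, big.NewInt(int64(len(alphabet))))
		if err != nil {
			return "", err
		}
		suffix[i] = alphabet[idx.Int64()]
	}
	if keep := maxLen - suffixLen; len(prefix) > keep {
		if keep < 0 {
			keep = 0
		}
		prefix = prefix[:keep]
	}
	return prefix + string(suffix), nil
}

func main() {
	// A 22-character prefix plus an 8-character random suffix: 30 of the 32
	// characters AWS allows for an ALB name.
	name, err := randomSuffixedName("prod-terraform-docker-", 8, 32)
	if err != nil {
		panic(err)
	}
	fmt.Println(name, len(name))
}
```

An 8-character base-36 suffix already gives 36^8 ≈ 2.8e12 possibilities, which seems plenty for uniqueness within a single namespace.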
Based on the current suggestions on this ticket and the linked ones, it seems like a viable path forward would be:

Where N is 6-10 characters.

I would be happy to work on this if the maintainers are broadly in favour of such an approach?