This request asks for the ability to automate the setup of ingress authorisation for a Client VPN endpoint.
The recently added Client VPN support creates the endpoint but does not go so far as to allow automation of the authorisation rules to grant the client access to areas of the network. Authorisation for Client VPN is described in the AWS Client VPN Authorisation Rules documentation.
Authorization rules act as firewall rules that grant access to networks. You should have an authorization rule for each network for which you want to grant access.
This relates to the aws_ec2_client_vpn_endpoint resource, recently added in 1.58.0 by @slapula under #7009.
I overlooked this when I wrote the initial resource: https://docs.aws.amazon.com/sdk-for-go/api/service/ec2/#EC2.AuthorizeClientVpnIngress
I'm thinking we treat this like aws_security_group and create a separate resource, something like aws_ec2_client_vpn_authorization_rules, and then define multiple ingress and egress attributes.
These are just my preliminary thoughts. I'm going to dig into this more later today.
I think it will only be ingress blocks, but yes, that sounds sensible. Since there would only be one aws_ec2_client_vpn_authorization_rules resource per Client VPN endpoint, I guess you could consider allowing the ingress blocks of attributes to be added to the aws_ec2_client_vpn_endpoint resource instead, but I assume the ingress/egress blocks were externalised from aws_security_group resources for good reason, so perhaps the same applies here.
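To make the shape concrete, a purely hypothetical sketch of what such a separate resource could look like (none of these names or attributes exist in the provider at this point; they only illustrate the ingress-block idea):

resource "aws_ec2_client_vpn_authorization_rules" "example" {
  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id

  # hypothetical block: one ingress entry per network to authorize
  ingress {
    target_network_cidr  = "10.0.0.0/16"
    authorize_all_groups = true
  }

  ingress {
    target_network_cidr = "10.1.0.0/16"
    access_group_id     = "<directory-group-id>"
  }
}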
Now that I've reviewed this some more, I see what you are saying. Unfortunately, it seems my initial take was incorrect and this probably won't work as a separate resource. It uses the Client VPN endpoint ID as its key identifier, so it would be hard to track in state. Because of that, the best way to manage it would be to add this to aws_ec2_client_vpn_endpoint. I'm just not sure how best to do that yet.
I have a solution for this and it should be ready in the next couple days.
Sounds good, thanks for looking into this!
+1
any movement on this one :) ?
As we create more and more endpoint auths for various cross-account VPCs/networks, this is quickly becoming an important Terraform feature for us.
Any news on this issue?
up !
Any news? Need any help?
+1
Any update on this?
Any update please?
As a workaround, I solved this by using the AWS CLI command.
resource "null_resource" "client_vpn_ingress" {
provisioner "local-exec" {
when = "create"
command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --authorize-all-groups"
}
provisioner "local-exec" {
when = "destroy"
command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --revoke-all-groups"
}
}
Hi guys,
any update here? I read that a possible solution was ready in February. What is blocking this one?
@jinil-kargo,
Regarding:
As a workaround, I solved this by using the AWS CLI command.
One downside of using null_resource is that it doesn't assume the role configured in the provider.
Assume:
provider aws {
  assume_role {
    role_arn = "arn:aws:iam::<account>:role/<role-name>"
  }
}
The null_resource doesn't assume the role set up in the provider; it uses the machine's profile instead.
@dostiharise thanks for sharing. I see it. It happens because the AWS CLI is executed outside the Terraform context. Maybe we can find a way to export the AWS credentials from the running Terraform environment... that would be great.
I modified the command in the null resource to include the --profile and --region flags, and referenced the profile I used in my aws provider.
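For example (the profile and region values here are placeholders for whatever your provider uses), the earlier workaround's command would become something like:

resource "null_resource" "client_vpn_ingress" {
  provisioner "local-exec" {
    # --profile/--region make the CLI use the same named profile as the aws provider
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --authorize-all-groups --profile my-terraform-profile --region us-east-1"
  }
}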
@zioalex I wrote this script block to help with assuming roles:
resource "null_resource" "client_vpn_ingress" {
provisioner "local-exec" {
when = "create"
command = <<END_OF_COMMAND
ASSUME_ROLE_RESULT=$(aws sts assume-role --role-arn "${local.assume_role_arn}" --role-session-name "client-vpn-session" --duration-seconds 900 --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text)
if [ $? -ne 0 ]; then
echo "Failed to assume role" 1>&2; return 1;
else
export AWS_ACCESS_KEY_ID="`echo $ASSUME_ROLE_RESULT | awk '{ print $1 }'`"
export AWS_SECRET_ACCESS_KEY="`echo $ASSUME_ROLE_RESULT | awk '{ print $2 }'`"
export AWS_SESSION_TOKEN="`echo $ASSUME_ROLE_RESULT | awk '{ print $3 }'`"
# do create
aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.client_vpn.id} --target-network-cidr ${local.vpc_cidr_block} --authorize-all-groups
fi
END_OF_COMMAND
}
}
You will need the following parameters for it to work:
local.assume_role_arn - The appropriate role to be assumed
local.vpc_cidr_block - The CIDR block to authorize
aws_ec2_client_vpn_endpoint.client_vpn.id - The client vpn endpoint's id
@kevinchabreck It's not just the profile; I was referring to assume role, which is different from profile credentials.
As follows:
provider aws {
  assume_role {
    role_arn = "arn:aws:iam::<account>:role/<role-name>"
  }
}
This one is hard to solve with just --profile and --region flags.
Can't you use a profile with a source_profile and role_arn?
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html
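For example, a CLI profile along these lines (account ID, role name, and profile names are placeholders) lets the null_resource command assume the role simply by passing --profile vpn-admin:

# ~/.aws/config
[profile vpn-admin]
role_arn       = arn:aws:iam::123456789012:role/terraform
source_profile = default
region         = us-east-1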
Wow! This is so much simpler.
How did I miss this? 🤦
Thanks.
[Update]
Now that I think about it, my above approach keeps much of the role-related info within the Terraform code and reuses the Terraform variables.
In a way, using the Terraform local variable for the role name achieves "high cohesion" and avoids multiple sources of truth.
I was using something very similar to what @jinil-kargo proposed
As a workaround, I solved this by using the AWS CLI command.
resource "null_resource" "client_vpn_ingress" { provisioner "local-exec" { when = "create" command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --authorize-all-groups" } provisioner "local-exec" { when = "destroy" command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --revoke-all-groups" }
However, that's deprecated now.
Warning: External references from destroy provisioners are deprecated
Destroy-time provisioners and their connection configurations may only
reference attributes of the related resource, via 'self', 'count.index', or
'each.key'.
References to other resources during the destroy phase can cause dependency
cycles and interact poorly with create_before_destroy.
Any other ideas?
I think you can skip the destroy provisioner.
Skip this:
provisioner "local-exec" {
when = "destroy"
command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --revoke-all-groups"
}
The reason is that when you drop/destroy the client_vpn it kills all the ingress rules along with it, so I'm guessing the revoke-client-vpn-ingress step is redundant and safe to skip.
But do keep the when = "create" provisioner as is.
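So, if I'm right about that, the whole workaround collapses to just this (same variables as in the original snippet):

resource "null_resource" "client_vpn_ingress" {
  provisioner "local-exec" {
    when    = "create"
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${var.client_vpn_id} --target-network-cidr ${var.vpc_cidr} --authorize-all-groups"
  }
}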
Do let me know if what I propose works!
Wish you luck.
Any update on when a solution for this issue will be merged into the main "aws_ec2_client_vpn_endpoint" resource?
I see that every few days someone makes a PR to address this issue.
+1
+1
Still no update?
+1
+1
Here's what's working for us right now with TF 12:
variable subnets {
type = set(string)
}
variable ad_ingress {
type = map(object({
cidr = string
ad_group_id = string
}))
}
locals {
stack = {
Resources = merge(local.ingress_custom, local.ingress_custom_routes)
}
ingress_custom = {
for k, v in var.ad_ingress : k => {
Type = "AWS::EC2::ClientVpnAuthorizationRule"
Properties = {
ClientVpnEndpointId = aws_ec2_client_vpn_endpoint.vpn.id
AccessGroupId = v.ad_group_id
TargetNetworkCidr = v.cidr
Description = format("%s Access, (%s)", k, v.cidr)
}
}
}
ingress_custom_routes = merge(flatten([[
for subnet in var.subnets : {
for k, v in var.ad_ingress : "${k}Route${split("-",subnet)[1]}" => {
Type = "AWS::EC2::ClientVpnRoute"
Properties = {
ClientVpnEndpointId = aws_ec2_client_vpn_endpoint.vpn.id
TargetVpcSubnetId = subnet
DestinationCidrBlock = v.cidr
Description = format("%sRoute-%s", k, subnet)
}
}
}
]])...)
}
with a var or local var defined like:
locals {
ad_ingress = {
internet_for_users = {
cidr = "0.0.0.0/0"
ad_group_id = "asdfg-hjkl-qwert-tyuuiop"
}
}
}
And then use the local.stack variable in a CloudFormation stack resource:
resource aws_cloudformation_stack client_vpn {
name = "ClientVPN-${var.name}"
template_body = jsonencode(local.stack)
}
and it'll create the routes and auth rules, if you don't mind creating CloudFormation stacks.
¯\_(ツ)_/¯
The local-exec step does the same thing as the above and is much simpler to maintain.
The day I start using CloudFormation again is the day AWS invents a CloudFormation template to stop Werner Vogels from sweating like a pig on the stage of AWS re:Invent.
@MartiUK you da real mvp
+1
I have a solution for this and it should be ready in the next couple days.
When?
@slapula update please??
Might have to use an EC2 instance with OpenVPN instead... It'd probably be cheaper as well.
If running Kubernetes, maybe just helm install openvpn?
+1
Hi all,
I see that this issue is still interesting to many. You can work around the missing Terraform resource by running AWS CLI commands from local-exec blocks. Here is an example that actually works fine for me:
resource "null_resource" "client-vpn_ingress" {
depends_on = [aws_ec2_client_vpn_endpoint.vpn-endpoint]
provisioner "local-exec" {
when = create
command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn-endpoint.id} --target-network-cidr 0.0.0.0/0 --authorize-all-groups"
}
provisioner "local-exec"{
when = destroy
command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${aws_ec2_client_vpn_endpoint.vpn-endpoint.id} --target-network-cidr 0.0.0.0/0 --revoke-all-groups"
}
lifecycle {
create_before_destroy = true
}
}
For more information about these commands, read the documentation.
@donsamiro The main issue I see with running local-exec (for any resource, not just this one) is that there's no drift detection, so the usefulness is limited to creating/destroying. It does provide some non-zero value, but it's not enough in all cases.
Additionally, depending on how you're authenticating when running Terraform, the credentials used for the main Terraform thread might not be available to the local-exec thread. Also, if you need to maintain compatibility between human users and service users (e.g. Terraform Cloud, CircleCI, Jenkins, etc.), the authentication mechanism might not be the same, which could complicate things.
It's a good suggestion though! It's definitely not nothing, and it will be useful for some use cases.
I'd also point out that the local-exec workarounds will break in Terraform 0.13 due to non-self resource references in the destroy provisioner: https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md
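If you still need the destroy provisioner on 0.13, one way around the non-self reference problem (a sketch only; the endpoint resource name and variable are placeholders) is to copy the values into triggers and reference them via self:

resource "null_resource" "client_vpn_ingress" {
  # values needed at destroy time are captured here so the destroy
  # provisioner only references self, which remains allowed
  triggers = {
    endpoint_id = aws_ec2_client_vpn_endpoint.vpn.id
    target_cidr = var.vpc_cidr
  }

  provisioner "local-exec" {
    command = "aws ec2 authorize-client-vpn-ingress --client-vpn-endpoint-id ${self.triggers.endpoint_id} --target-network-cidr ${self.triggers.target_cidr} --authorize-all-groups"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "aws ec2 revoke-client-vpn-ingress --client-vpn-endpoint-id ${self.triggers.endpoint_id} --target-network-cidr ${self.triggers.target_cidr} --revoke-all-groups"
  }
}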
I found @MartiUK's solution to be the best until there are official resources. I have it working in Terraform 0.12.
It allows me to use AWS providers with assume-role and not have to deal with how to get the permissions set on the AWS CLI for an MFA'd assume-role.
It works beautifully for create/update/destroy with clean plan output. I feel like it can easily be adapted to first-class resources using the same data structure in the future.
I can't wait for first-class resources to finish out AWS VPN IaC but this pattern is beautiful and I may use it for other resources not yet available in Terraform.
Agreed! I ended up not going in that direction though, but for unrelated reasons.
I haven't seen that pattern anywhere else and it's great (although certainly with some complexity), and @MartiUK if you have any publishing outlet like Medium or a personal/company blog, I would very much encourage you to make a post about it.
One thing to keep in mind, though, is that it may make it harder to transition away from using aws_cloudformation_stack as a way to implement unsupported resources once native Terraform resources become available. I haven't used CloudFormation in a while, but if I remember correctly, you can't "detach" a resource from a stack (i.e. the CloudFormation equivalent of terraform state rm), so even though you could import the resulting resources into a Terraform state, your stack might end up with drift, and with that comes the risk of somebody or something accidentally making changes with unintended consequences.
Just a thought!
This has been released in version 2.70.0 of the Terraform AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template for triage. Thanks!
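For anyone landing here later, the resource that shipped is aws_ec2_client_vpn_authorization_rule; usage looks roughly like the sketch below (attribute names from memory, so check the provider documentation for the authoritative schema):

resource "aws_ec2_client_vpn_authorization_rule" "example" {
  client_vpn_endpoint_id = aws_ec2_client_vpn_endpoint.example.id
  target_network_cidr    = aws_subnet.example.cidr_block
  authorize_all_groups   = true
}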
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!