Kops: Restrict IAM Roles permissions

Created on 13 Feb 2017 · 58 comments · Source: kubernetes/kops

According to this document, the permissions look too open. Could we grant only the minimum privileges (and list all the necessary permissions)?

For Master, change the ec2, route53, and elasticloadbalancing to:

ec2:AllocateAddress
ec2:AssociateAddress
ec2:AssociateRouteTable
ec2:AttachInternetGateway
ec2:AuthorizeSecurityGroupEgress
ec2:AuthorizeSecurityGroupIngress
ec2:CreateInternetGateway
ec2:CreateNatGateway
ec2:CreateRoute
ec2:CreateRouteTable
ec2:CreateSecurityGroup
ec2:CreateSubnet
ec2:CreateTags
ec2:CreateVpc
ec2:CreateVolume
ec2:DescribeAddresses
ec2:DescribeAvailabilityZones
ec2:DescribeInstances
ec2:DescribeInternetGateways
ec2:DescribeKeyPairs
ec2:DescribeNatGateways
ec2:DescribeRegions
ec2:DescribeRouteTables
ec2:DescribeSecurityGroups
ec2:DescribeSubnets
ec2:DescribeVpcs
ec2:ModifySubnetAttribute
ec2:ModifyVpcAttribute
ec2:RevokeSecurityGroupEgress
ec2:RunInstances

route53:ChangeResourceRecordSets
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListResourceRecordSets

elasticloadbalancing:CreateLoadBalancer
elasticloadbalancing:ConfigureHealthCheck
elasticloadbalancing:ModifyLoadBalancerAttributes
elasticloadbalancing:DescribeLoadBalancers
elasticloadbalancing:RegisterInstancesWithLoadBalancer
elasticloadbalancing:SetLoadBalancerPoliciesOfListener

For Node, update route53 to:

route53:ChangeResourceRecordSets
route53:GetChange
route53:GetHostedZone
route53:ListHostedZones
route53:ListResourceRecordSets

These are simply borrowed from the AWS policy settings for Tectonic; should that be sufficient?
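
For reference, each group above would end up as a statement in an IAM policy document attached to the role. A minimal sketch using the route53 group (the ec2 and elasticloadbalancing groups would become parallel statements, and the resource could later be narrowed to a specific hosted zone):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:GetChange",
        "route53:GetHostedZone",
        "route53:ListHostedZones",
        "route53:ListResourceRecordSets"
      ],
      "Resource": "*"
    }
  ]
}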

Labels: area/security, lifecycle/rotten

Most helpful comment

Proposal

BYO IAM roles. Need to scope whether this would work, but it would allow a transition to letting users create their own IAM roles instead of having kops create them. Keep the same functionality that we have now and add the advanced-user option into the API / YAML.

Build json examples for each role, and document those examples.

Improvements

Once the use of your own roles is tested we can determine whether we want to tighten down the permissions. I would recommend that we look at VPC-level isolation for the nodes. Currently, if I understand the default IAM permissions, we are not reducing the master and node perms to just the VPC into which the cluster is deployed.

Leg Work

So here is some leg work that I have completed. NOT tested.

Kops user / CLI permissions
Kops k8s master permissions
Kops k8s node permissions

All 58 comments

Related to #1577 #1871 #1870

BTW, I believe the masters also need the ability to remove ELBs, although that should probably be scoped to the ones with KubernetesCluster tags.

Thanks for the information. Glad that you have done some awesome work to push this :)

We could add this to Master:

elasticloadbalancing:DeleteLoadBalancer
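
If the delete permission is added, one way to scope it to the cluster's own load balancers would be a tag condition along these lines (a sketch only; the cluster name is a placeholder, and whether the classic ELB API honors the aws:ResourceTag condition key should be verified):

{
  "Effect": "Allow",
  "Action": [
    "elasticloadbalancing:DeleteLoadBalancer"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:ResourceTag/KubernetesCluster": "my.cluster.example.com"
    }
  }
}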

@ffjia have you tested this?

@chrislovecnm not yet. Can kops provision a cluster with an existing IAM role?

@ffjia You can't yet pass in a role, but you can add policies. I know it's awkward for this use case, but it could probably still be used for testing.

@yissachar In that case, I'd need to add a bunch of actions with Effect: Deny? It would be better if we could just allow the needed permissions explicitly :)

Yes, that's why I said it was awkward.

It isn't really intended for overriding the base permissions, more for adding extra permissions that aren't included by default.

Proposal

BYO IAM roles. Need to scope whether this would work, but it would allow a transition to letting users create their own IAM roles instead of having kops create them. Keep the same functionality that we have now and add the advanced-user option into the API / YAML.

Build json examples for each role, and document those examples.

Improvements

Once the use of your own roles is tested we can determine whether we want to tighten down the permissions. I would recommend that we look at VPC-level isolation for the nodes. Currently, if I understand the default IAM permissions, we are not reducing the master and node perms to just the VPC into which the cluster is deployed.

Leg Work

So here is some leg work that I have completed. NOT tested.

Kops user / CLI permissions
Kops k8s master permissions
Kops k8s node permissions

Here is another issue with good information: https://github.com/kubernetes/kops/issues/1577

It'd be great if kops updated the permissions based on some configuration information. Primarily I'm thinking that if someone selects a CNI, they probably don't need/want VPC modification policies. This would make kops better for organizations that have tight VPC control policies. (This of course would make it hard to switch between CNI and VPC routing, but there could be a flag there too).

There are probably other examples of things like this as well, but VPC creation and route53 modifications are things that frequently are in separate areas of an organization.

I'm going to keep this issue going, but I am going to start breaking it into smaller issues, which we can PR against.

Code / Documentation good ...

@fridiculous just worked through the s3 perms:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Principal": "*", "Action": "s3:*", "Resource": [ "arn:aws:s3:::kops.dev.example.com", "arn:aws:s3:::kops.dev.example.com/*" ], "Condition": { "StringNotLike": { "aws:userId": [ "AIDAUSER1", --user1 "123456789", --root "AROAROLE1:*", --masters.dev.example.com "AROAROLE2:*", --nodes.dev.example.com ] } } } ] }

http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html was mentioned :)

Some feedback on a CF template that is working:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Custom masters Role",
  "Parameters": {
  },
  "Mappings": {
  },
  "Resources": {
    "MastersRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName": "cmasters.kops.lab",
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "ec2.amazonaws.com"
                ]
              },
              "Action": [
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path": "/"
      }
    },
    "MastersPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "cmasters.kops.lab",
        "Roles": [ { "Ref": "MastersRole" } ],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:BatchGetImage"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "ec2:Describe*",
                "ec2:Attach*",
                "ec2:Detach*",
                "ec2:ModifyInstanceAttribute",
                "ec2:CreateRoute",
                "ec2:DeleteRoute",
                "ec2:*SecurityGroup*",
                "ec2:CreateTags"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "elasticloadbalancing:*"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets",
                "route53:GetHostedZone"
              ],
              "Resource": [
                "arn:aws:route53:::hostedzone/XXXXXXXX"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:GetChange"
              ],
              "Resource": [
                "arn:aws:route53:::change/*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:ListHostedZones"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "s3:*"
              ],
              "Resource": [
                "arn:aws:s3:::kops.lab/*kops.lab",
                "arn:aws:s3:::kops.lab/*kops.lab/*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
              ],
              "Resource": [
                "arn:aws:s3:::kops.lab"
              ]
            }
          ]
        }
      }
    }
  },
  "Outputs": {
  }
}
The nodes one:

{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Custom nodes Role",
  "Parameters": {
  },
  "Mappings": {
  },
  "Resources": {
    "NodesRole": {
      "Type": "AWS::IAM::Role",
      "Properties": {
        "RoleName": "cnodes.kops.lab",
        "AssumeRolePolicyDocument": {
          "Statement": [
            {
              "Effect": "Allow",
              "Principal": {
                "Service": [
                  "ec2.amazonaws.com"
                ]
              },
              "Action": [
                "sts:AssumeRole"
              ]
            }
          ]
        },
        "Path": "/"
      }
    },
    "NodesPolicy": {
      "Type": "AWS::IAM::Policy",
      "Properties": {
        "PolicyName": "cnodes.kops.lab",
        "Roles": [ { "Ref": "NodesRole" } ],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "ec2:Describe*"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "ecr:GetAuthorizationToken",
                "ecr:BatchCheckLayerAvailability",
                "ecr:GetDownloadUrlForLayer",
                "ecr:GetRepositoryPolicy",
                "ecr:DescribeRepositories",
                "ecr:ListImages",
                "ecr:BatchGetImage"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets",
                "route53:GetHostedZone"
              ],
              "Resource": [
                "arn:aws:route53:::hostedzone/XXXXXXXXXX"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:GetChange"
              ],
              "Resource": [
                "arn:aws:route53:::change/*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "route53:ListHostedZones"
              ],
              "Resource": [
                "*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "s3:*"
              ],
              "Resource": [
                "arn:aws:s3:::kops.lab/*kops.lab",
                "arn:aws:s3:::kops.lab/*kops.lab/*"
              ]
            },
            {
              "Effect": "Allow",
              "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
              ],
              "Resource": [
                "arn:aws:s3:::kops.lab"
              ]
            }
          ]
        }
      }
    }
  },
  "Outputs": {
  }
}

Hey IAM gurus. Please take a peek and comment on https://github.com/kubernetes/kops/pull/2497/files#diff-55ece441df560384483b9d78ed8785fdR265

I am working on a PR to create IAM policies for

  • admin, someone that uses kops
  • master
  • node

With tightened permissions.

Here is where I am at. A kubernetes e2e test using these policies is passing. I have not tested ecr or autoscaling.

Master Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateRoute",
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteRoute",
        "ec2:DeleteVolume",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeInstances",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeVolumes",
        "ec2:DetachVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:GetAsgForInstance",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "autoscaling:UpdateAutoScalingGroup"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:ListServerCertificates",
        "iam:GetServerCertificate"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::aws.k8spro.com",
        "arn:aws:s3:::aws.k8spro.com/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::aws.k8spro.com"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/Z151KI3YMRFBLY"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:GetChange"
      ],
      "Resource": [
        "arn:aws:route53:::change/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Node Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/Z151KI3YMRFBLY"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:GetChange"
      ],
      "Resource": [
        "arn:aws:route53:::change/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::aws.k8spro.com",
        "arn:aws:s3:::aws.k8spro.com/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::aws.k8spro.com"
      ]
    }
  ]
}

Admin installer policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AllocateAddress",
        "ec2:AssociateAddress",
        "ec2:AssociateDhcpOptions",
        "ec2:AssociateRouteTable",
        "ec2:AttachInternetGateway",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupEgress",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateDhcpOptions",
        "ec2:CreateInternetGateway",
        "ec2:CreateNatGateway",
        "ec2:CreateKeyPair",
        "ec2:CreateRoute",
        "ec2:CreateRouteTable",
        "ec2:CreateSecurityGroup",
        "ec2:CreateSubnet",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:CreateVpc",
        "ec2:DeleteDhcpOptions",
        "ec2:DeleteInternetGateway",
        "ec2:DeleteKeyPair",
        "ec2:DeleteNatGateway",
        "ec2:DeleteRoute",
        "ec2:DeleteRouteTable",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteSubnet",
        "ec2:DeleteVolume",
        "ec2:DeleteVpc",
        "ec2:DeleteTags",
        "ec2:DescribeAddresses",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeDhcpOptions",
        "ec2:DescribeHosts",
        "ec2:*DescribeImage*",
        "ec2:*DescribeInstances",
        "ec2:DescribeInternetGateways",
        "ec2:DescribeKeyPairs",
        "ec2:DescribeNatGateways",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeTags",
        "ec2:*DescribeVolume*",
        "ec2:DescribeVpcAttribute",
        "ec2:DescribeVpc",
        "ec2:DescribeVpcs",
        "ec2:DetachInternetGateway",
        "ec2:DetachVolume",
        "ec2:DisassociateRouteTable",
        "ec2:ImportKeyPair",
        "ec2:ModifyVpcAttribute",
        "ec2:ReleaseAddress",
        "ec2:ReplaceRoute",
        "ec2:RevokeSecurityGroupEgress",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:RunInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:ApplySec*",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeTags",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:RemoveTags",
        "elasticloadbalancing:SetSecurityGroups"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:AttachInstances",
        "autoscaling:AttachLoadBalancers",
        "autoscaling:CreateAutoScalingGroup",
        "autoscaling:CreateLaunchConfiguration",
        "autoscaling:CreateOrUpdateTags",
        "autoscaling:DeleteAutoScalingGroup",
        "autoscaling:DeleteLaunchConfiguration",
        "autoscaling:DeleteTags",
        "autoscaling:Describe*",
        "autoscaling:SetDesiredCapacity",
        "autoscaling:TerminateInstanceInAutoScalingGroup",
        "autoscaling:UpdateAutoScalingGroup"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iam:AddRoleToInstanceProfile",
        "iam:CreateInstanceProfile",
        "iam:CreateRole",
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:GetInstanceProfile",
        "iam:ListInstanceProfiles",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:ListInstanceProfiles",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "iam:UpdateAssumeRolePolicy"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets",
        "route53:ListResourceRecordSets",
        "route53:GetHostedZone"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/Z151KI3YMRFBLY"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:GetChange"
      ],
      "Resource": [
        "arn:aws:route53:::change/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:DeleteObject",
        "s3:GetObject",
        "s3:GetBucketLocation",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::aws.k8spro.com",
        "arn:aws:s3:::aws.k8spro.com/*"
      ]
    }
  ]
}

@faraazkhan and others ... comments?

@chrislovecnm Understood, we have not tested with ECR/Autoscaling yet, but I'd imagine the master node will also require autoscaling:DescribeLaunchConfigurations, so why not allow autoscaling:Describe* to the master nodes as well? Same thing with ECR: a typical read-only policy for ECR includes the ecr:DescribeImages action. See http://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html

I am not seeing autoscaling:DescribeLaunchConfigurations in the autoscaling code, and I have not changed the ecr policies from what we have now. Let me double check, as we may be missing something in our current codebase.

As discussed @chrislovecnm

  1. kms should be locked down to the relevant resources (ideally using the cluster-named resources).
  2. route53 change sets are not required for nodes, only masters.
  3. s3:* should be avoided and have explicit rules.
  4. use Sids so we can reference sections and indicate each statement's use.
  5. the master is missing elasticloadbalancing:AddTags, elasticloadbalancing:DescribeTags, and elasticloadbalancing:RemoveTags

@ajohnstone awesome feedback. In regards to kms, can you be more specific about the ARN to use as a resource?

@ajohnstone

kms should be locked down to the relevant resources (ideally using the cluster-named resources)

I have tested adding the specific key ARNs to the kms permissions.

{
            "Effect": "Allow",
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey",
                "kms:CreateGrant"
            ],
            "Resource": [
                "arn:aws:kms:us-east-1:xxx:key/xxx",
                "arn:aws:kms:us-east-1:xxx:key/xxx",
                "arn:aws:kms:us-east-1:xxx:key/xxx"
            ]
}

I removed "kms:ListGrants", "kms:RevokeGrant", but I need to test more. I am wondering if they are needed for the masters to mount encrypted volumes on the nodes.

_Question_

Not sure how to use the cluster-name resources. I also noticed that we could maybe reduce the policy permissions to just a specific region. I do not want to go nuts with this, but I am wondering if we want to tweak further.
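
On the region question, one option would be a region condition on the statements; an untested sketch, where aws:RequestedRegion is the global condition key and the action and region values are just examples:

{
  "Effect": "Allow",
  "Action": [
    "ec2:DescribeInstances"
  ],
  "Resource": "*",
  "Condition": {
    "StringEquals": {
      "aws:RequestedRegion": "us-east-1"
    }
  }
}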

route53 change sets are not required for nodes, only masters.

Changed. Thanks

s3:* should be avoided and have explicit rules.

I have tested and trimmed down the s3 permissions:

        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-bucket/my-cluster-name",
                "arn:aws:s3:::my-bucket/my-cluster-name/*"
            ],
            "Sid": "kopsK8sStateStoreAccess"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::myBucketName"
            ],
            "Sid": "kopsK8sStateStoreAccessList"
        }

use Sids so we can reference sections and indicate each statement's use.

Done

the master is missing elasticloadbalancing:AddTags, elasticloadbalancing:DescribeTags, and elasticloadbalancing:RemoveTags

I tested Elastic Load Balancers by hand, and the lack of those specific permissions does not seem to cause any problems. We are adding tags via the create call, not a separate call, so at this point I do not think we need to add the tag permissions.

@faraazkhan I think we can prune down the ECR permissions more. The cloud provider code is getting a Docker authentication token and then using the Docker client. I cannot find any other API-level calls.
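
A possible pruned ECR statement, assuming only the token call plus the image-pull actions are actually needed (untested, so treat it as a guess to validate):

{
  "Effect": "Allow",
  "Action": [
    "ecr:GetAuthorizationToken",
    "ecr:BatchCheckLayerAvailability",
    "ecr:GetDownloadUrlForLayer",
    "ecr:BatchGetImage"
  ],
  "Resource": "*"
}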

Here are the latest policies I have, without ECR permissions, as I need an example without those permissions.

Master:
https://gist.github.com/chrislovecnm/0e23d11903cc36b99ccf73a013d5ae56

Node:
https://gist.github.com/chrislovecnm/a93941b4afbbd5afe746354af0cfddcc

In the master, I was wondering how many of these are essential-to-kubernetes vs needed for kops. For example, the route53 and KMS stuff look like they _could_ be kops or add-on specific. Ditto for autoscaling: (autoscaling contrib addon?).

KMS perms are for encrypted volumes. The Route53 perms, autoscaling perms, s3 perms, and DescribeInstances on the nodes are kops / autoscaler specific.

Keeping this open, but the new policies are in place.

This change was causing lots of errors on my cluster:

"eventSource": "ec2.amazonaws.com", 
"eventName": "DescribeTags", 
"errorCode": "Client.UnauthorizedOperation",
"errorMessage": "You are not authorized to perform this operation.",

I had to manually add the "ec2:DescribeTags" action to kopsK8sEC2NodePerms when kops changed it from ec2:Describe* to just ec2:DescribeInstances (https://gist.github.com/chrislovecnm/a93941b4afbbd5afe746354af0cfddcc#file-iam-node-json).

I am not sure what exact part of kubernetes is calling DescribeTags but each node seems to be trying to get the tags of itself (based on the CloudWatch log showing that the user making the API call and the instance being described in the request parameters were the same).

The diff when doing an update cluster to bring up a new instance group shows the change. The cluster was created with kops 1.6.2 but I had to make the new instance group with kops 1.8.0 because we wanted to use the new c5 instances (btw, how is it that ec2 instance types are hard-coded and not just pulled in dynamically??):

  IAMRolePolicy/nodes.production-us-east-1.wink.com
    PolicyDocument
                            ...
                                "Statement": [
                                  {
                            +       "Sid": "kopsK8sEC2NodePerms",
                                    "Effect": "Allow",
                                    "Action": [
                            +         "ec2:DescribeInstances"
                            -         "ec2:Describe*"
                                    ],
                                    "Resource": [
                            ...

DescribeTags

Is a master calling this or a node? Is protokube running? kubelet should be the only thing on the nodes. @KashifSaadat are you guys seeing these errors?

Also, @jonhosmer what version of k8s?

btw, how is it that ec2 instance types are hard-coded and not just pulled in dynamically

Different account types have different ec2 instance types, gov for example. If you have a better option, please PR it!! Or file an issue :) Another reason is that new instance types often require code in both kops and upstream; for example, we had to add code for the C5s.

The cloudwatch logs say the call is coming from the worker node itself and trying to describe only its own tags (the master nodes might also be doing this but the IAM role for masters allows this by default so we didn't see any errors from any masters).

Each worker node has the following services enabled:

boot.automount                         enabled-runtime
[email protected]       enabled
kops-configuration.service             enabled
oem-cloudinit.service                  enabled
protokube.service                      enabled
[email protected]                  enabled-runtime
docker.socket                          enabled

A docker ps shows each node running containers flannel, kube-proxy, and protokube:

CONTAINER ID        IMAGE                                      COMMAND
904db95f9bbd        quay.io/coreos/flannel                     "/opt/bin/flanneld..."
44b5dde37735        gcr.io/google_containers/kube-proxy        "/bin/sh -c 'echo ..."
78e4f9d79ec0        quay.io/coreos/flannel                     "/bin/sh -c 'set -..."
87a20dbc965c        protokube:1.6.2                            "/usr/bin/protokub..."

what version of k8s?
v1.6.6

@jonhosmer are you running gossip- or DNS-based kops? If DNS, can you stop the protokube service and see if the calls stop?

Regardless, we seem to be missing a new role from somewhere.

I'm afraid I've not seen this issue when using the tightened IAM policy.

@jonhosmer would you be able to provide your Cluster Spec (with sensitive info dropped), I'll try to recreate the issue and investigate.

@chrislovecnm dns (I think). The cluster was created with:

kops create cluster \
    --admin-access $ADMIN_ACCESS \
    --api-loadbalancer-type public \
    --cloud $CLOUD \
    --cloud-labels $CLOUD_LABELS \
    --image $CORE_OS_AMI \
    --kubernetes-version $K8S_VERSION \
    --master-size $MASTER_SIZE \
    --master-zones $MASTER_ZONES \
    --name $CLUSTER_NAME \
    --networking flannel \
    --ssh-public-key $SSH_PUBLIC_KEY \
    --topology public \
    --vpc $VPC \
    --zones $ZONES

I checked the new instance group I launched with kops 1.8.0 and it does not have a protokube service running. I just reapplied the IAM policy change (removing ec2:Describe*, adding ec2:DescribeInstances) and I see authorization failures coming from nodes in that new instance group:

{
  "eventId": "XXXXX",
  "ingestionTime": 1513186652599,
  "logStreamName": "XXXXX_CloudTrail_us-east-1",
  "message": {
    "awsRegion": "us-east-1",
    "errorCode": "Client.UnauthorizedOperation",
    "errorMessage": "You are not authorized to perform this operation.",
    "eventID": "a1f72d69-d8af-4a27-a0e3-XXXXX",
    "eventName": "DescribeTags",
    "eventSource": "ec2.amazonaws.com",
    "eventTime": "2017-12-13T17:29:14Z",
    "eventType": "AwsApiCall",
    "eventVersion": "1.05",
    "recipientAccountId": "XXXXX",
    "requestID": "4b78cb55-2f6c-4108-a1af-XXXXX",
    "requestParameters": {
      "filterSet": {
        "items": [
          {
            "name": "resource-id",
            "valueSet": {
              "items": [
                {
                  "value": "i-XXXXXcdda9fd18518"
                }
              ]
            }
          }
        ]
      }
    },
    "responseElements": null,
    "sourceIPAddress": "XXX_NODE_PUBIC_IP_XXX",
    "userAgent": "Boto/2.46.1 Python/2.7.13 Linux/4.13.16-coreos-r2",
    "userIdentity": {
      "accessKeyId": "XXXXX",
      "accountId": "XXXXX",
      "arn": "arn:aws:sts::XXXXX:assumed-role/nodes.production-us-east-1-8.XXXXXXX.com/i-XXXXXcdda9fd18518",
      "principalId": "XXXXX:i-XXXXXcdda9fd18518",
      "sessionContext": {
        "attributes": {
          "creationDate": "2017-12-13T16:25:38Z",
          "mfaAuthenticated": "false"
        },
        "sessionIssuer": {
          "accountId": "XXXXX",
          "arn": "arn:aws:iam::XXXXX:role/nodes.production-us-east-1-8.XXXXX.com",
          "principalId": "XXXXX",
          "type": "Role",
          "userName": "nodes.production-us-east-1-8.XXXXX.com"
        }
      },
      "type": "AssumedRole"
    }
  },
  "timestamp": 1513186648938
}

@KashifSaadat Cluster spec attached production-us-east-1-8.XXXXXXX.com-spec.yaml.txt

...And I just noticed the "userAgent": "Boto/2.46.1 Python/2.7.13 Linux/4.13.16-coreos-r2", in the CloudTrail event log so maybe this is not a Kops issue after all... I am looking into what other pods I have running that could be using python/boto now.

Yup, this appears to be a Datadog issue, not kops, my apologies! I have a Datadog DaemonSet running and apparently the Datadog agent uses the node's IAM instance profile to retrieve the tags:
https://github.com/DataDog/dd-agent/blob/master/utils/cloud_metadata.py#L178

Is there a way to modify the IAM role policy in the cluster spec or kops create flags?

If not, I will probably either disable this in the datadog config (https://github.com/DataDog/dd-agent/blob/9ae043ce49fb1384912977268aa413c4a0fd7f5b/datadog.conf.example#L54), or manually edit the role after any updates if we really want the instance tags in datadog.

Hey @jonhosmer, good job on debugging it :)

You can add additional IAM policies onto the master and node roles, a short guide is available here: https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md#adding-additional-policies

Let me know if you have any issues with that.
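
For anyone hitting the same thing, the extra node statement for this case would look roughly like the following, supplied per role via the cluster spec's additionalPolicies field described in that guide (the shape is assumed from the linked doc, not copied from it):

[
  {
    "Effect": "Allow",
    "Action": ["ec2:DescribeTags"],
    "Resource": ["*"]
  }
]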

Adding the additional policies worked like a charm, thank you! One thing to note though:

Upgrading an existing cluster will use the legacy IAM privileges to reduce risk of potential regression.

This does not appear to be the case. My cluster spec has legacy: true but is still updated with the new stricter IAM policies. Want me to make a separate issue for this?

A kops update .. will still show updates to the IAM roles, which may just be slight reordering and Statement IDs being added in. The permissions themselves should be mostly the same / identical.

If some permissions are being incorrectly dropped and are causing problems then yes please raise a new issue and feel free to assign me.

I am going to close this very long issue. We are code complete; please file new issues for any other IAM improvements or problems.

Hi. I moved to the new stricter IAM roles and had volume issues on the nodes (deployments that had to create volumes or needed to claim existing volumes did not work). Not sure if this was an oversight. The masters of course had everything they needed. For now I will add the volume privileges to the additional policies in the cluster YAML. Maybe the ability to create and claim volumes should be considered part of the "standard" cluster deployment?

Interesting. This seems to be a permission issue with the masters because of this condition on the AttachVolume action: ec2:ResourceTag/KubernetesCluster = MY_CLUSTER_NAME. I know that these strict policies are for cluster creation, but again, maybe the basic policies should allow volume attachment as it is a standard thing for K8s to do. Just a thought.
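
In other words, the generated master statement is conditioned roughly like this (a reconstruction based on the condition quoted above, not the exact kops output):

{
  "Effect": "Allow",
  "Action": [
    "ec2:AttachVolume"
  ],
  "Resource": [
    "*"
  ],
  "Condition": {
    "StringEquals": {
      "ec2:ResourceTag/KubernetesCluster": "MY_CLUSTER_NAME"
    }
  }
}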

@michaelajr if I understand you correctly, you are having pods on nodes attach volumes? First, you can accomplish this by adding an additional policy to the nodes; see https://github.com/kubernetes/kops/blob/master/docs/iam_roles.md

I get what you mean, but aren't volume attach and detach usually handled by the masters? Seems like this is a perm needed by a custom application and not k8s. Please let me know if I am incorrect.

@chrislovecnm Hey. So I initially thought the nodes needed privileges - that was incorrect. As you stated the masters attach volumes. The issue is probably not a big deal. But here it is. The PersistentVolume object has an option to specify a pre-existing EBS volume:

kind: PersistentVolume
apiVersion: v1
metadata:
  name: MyVol
spec:
  capacity:
    storage: 32Gi
  accessModes:
  - ReadWriteMany
  awsElasticBlockStore:
    volumeID: vol-xxxxxxxx
  storageClassName: my-volume

But because the AttachVolume IAM action that kops creates for the masters has a condition on the volume being tagged with KubernetesCluster: CLUSTER_NAME, it will not attach a volume without that tag. It took me a while to realize the issue was the action's condition. So folks either have to tag their pre-existing volumes with the cluster they intend to use them with, or override all the volume IAM actions in the additionalPolicies field in the cluster YAML so they have no conditions. Hope that makes sense.

But because the AttachVolume IAM action that kops creates for the masters has a condition on the volume being tagged with KubernetesCluster: CLUSTER_NAME

Oh, good catch! Hrmmm wondering if we should remove the tag restriction.

@KashifSaadat thoughts about loosening the policy so that EBS volumes do not have to have the cluster tag? @michaelajr has a valid use case where existing volumes may not have those tags.

Although this is a valid use case, I'd opt for a secure-by-default approach. Good documentation and error logs should help debug use cases such as @michaelajr's, which is valid, but so is deploying apps that need tons of other IAM perms, for which we have kube2iam.

On one hand, I think this is just part of what the masters should do out of the box, just like being able to enter Route53 records or launch instances. But on the other... I also like that it is locked down to JUST the volumes created by the cluster. I.e., if you do not BYOV, then the PersistentVolume is created with the tag and claimed just fine. So maybe the 80/20 rule is fine here, as long as it is well documented that you'll need to loosen the restrictions when bringing your own. I would NOT tell folks to tag their volumes, as kops might delete them when the cluster is deleted.

Both good points. I'm not keen on opening AttachVolume for any volume, as the cluster could be in a shared AWS Account with other resources and so there's risk that you interfere with volumes used for other deployments.

The "secure by default" approach should be sufficient and then users have at least the following options to support this extra use-case (which we should ensure is appropriately documented):

  • Add an additional IAM policy within the kops ClusterSpec to grant open access to the AttachVolume call (a sketch follows this list)
  • Tag their volume with KubernetesCluster so it is associated with the cluster (we should test whether this means the volume is deleted as part of kops delete cluster, as you've mentioned, and include a warning in the docs)
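
A sketch of the first option: an additionalPolicies entry for the master role that allows volume attach/detach without the tag condition (the action list is a guess at what this use case needs, not a tested policy):

[
  {
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:DetachVolume"
    ],
    "Resource": ["*"]
  }
]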

Tag their volume with KubernetesCluster so it is associated with the cluster (we should test whether this means the volume is deleted as part of kops delete cluster, as you've mentioned, and include a warning in the docs)

Yah don't use that tag.

Wondering if we should add another tag name that we can use to enable this? We can document the tag name, so that a user can use that tag and not run into deletion issues.

@chrislovecnm Yes I like that idea, roughly similar to the shared subnets tag approach (although kops sets the shared subnet tag itself, whereas here it wouldn't set a tag). Then kops can set up the permissions out of the box and users can add tags to opt-in for specific volumes to be accessible.

@jordanjennings actually maybe we can re-use the shared tags ...

K8s noob here;
Just ran into this issue today during some testing of a Docker Swarm to K8s migration. By decoding the authorization message I saw via kubectl describe pod, I found it was the master IAM role being used. I dug into the master IAM role policy and saw the tag requirement. It would be nice to have the alternate tag option, so volumes can be attached by the master IAM role but are not deleted on cluster delete. Thanks everyone for the hard work you put in on this stuff! Kops v1.8.1, K8s v1.8.8

@chrislovecnm This is way old now, but is there a way to specify your own IAM role that kops will use cluster wide?

Related to this comment about BYO IAM https://github.com/kubernetes/kops/issues/1873#issuecomment-294966983

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Well, here are my limited create/destroy policies, if anyone is concerned about the permissions issue. They work fine in my case; I ran into the same problem of the security team not being willing to grant me that many permissions to create clusters on production AWS.

// Creation

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:GetLifecycleConfiguration",
                "s3:GetBucketTagging",
                "s3:GetBucketWebsite",
                "s3:GetBucketLogging",
                "s3:ListBucket",
                "s3:GetAccelerateConfiguration",
                "s3:GetBucketVersioning",
                "s3:GetReplicationConfiguration",
                "s3:PutObject",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:GetEncryptionConfiguration",
                "s3:PutBucketTagging",
                "s3:GetBucketRequestPayment",
                "s3:GetBucketCORS",
                "s3:GetObjectTagging",
                "s3:PutBucketAcl",
                "s3:PutObjectTagging",
                "s3:GetBucketLocation",
                "s3:PutObjectAcl",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::your_bucket/*",
                "arn:aws:s3:::your_bucket"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateDhcpOptions",
                "ec2:AuthorizeSecurityGroupIngress",
                "ec2:DescribeInstances",
                "ec2:CreateKeyPair",
                "route53:GetHostedZone",
                "iam:CreateRole",
                "ec2:AttachInternetGateway",
                "iam:PutRolePolicy",
                "iam:AddRoleToInstanceProfile",
                "ec2:AssociateRouteTable",
                "ec2:DescribeInternetGateways",
                "elasticloadbalancing:DescribeLoadBalancers",
                "ec2:CreateRoute",
                "ec2:CreateInternetGateway",
                "autoscaling:DescribeAutoScalingGroups",
                "ec2:DescribeVolumes",
                "route53:ListResourceRecordSets",
                "ec2:DescribeAccountAttributes",
                "autoscaling:UpdateAutoScalingGroup",
                "route53:UpdateHostedZoneComment",
                "ec2:DescribeKeyPairs",
                "elasticloadbalancing:DescribeInstanceHealth",
                "iam:ListRolePolicies",
                "ec2:DescribeNetworkAcls",
                "ec2:DescribeRouteTables",
                "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
                "iam:GetRole",
                "route53:CreateHostedZone",
                "ec2:ImportKeyPair",
                "ec2:DescribeVpcClassicLinkDnsSupport",
                "ec2:CreateTags",
                "ec2:DescribeReservedInstancesOfferings",
                "ec2:ModifyNetworkInterfaceAttribute",
                "autoscaling:DescribeTags",
                "ec2:CreateRouteTable",
                "route53:ChangeResourceRecordSets",
                "ec2:RunInstances",
                "ec2:DescribeVpcClassicLink",
                "ec2:CreateVolume",
                "elasticloadbalancing:DescribeLoadBalancerAttributes",
                "elasticloadbalancing:AddTags",
                "route53:ChangeTagsForResource",
                "ec2:CreateSubnet",
                "ec2:AssociateAddress",
                "ec2:DescribeSubnets",
                "elasticloadbalancing:ModifyLoadBalancerAttributes",
                "iam:GetRolePolicy",
                "autoscaling:CreateAutoScalingGroup",
                "iam:CreateInstanceProfile",
                "ec2:DescribeAddresses",
                "route53:GetChange",
                "ec2:CreateNatGateway",
                "ec2:DescribeInstanceAttribute",
                "elasticloadbalancing:ConfigureHealthCheck",
                "ec2:DescribeRegions",
                "autoscaling:DescribeLaunchConfigurations",
                "ec2:CreateVpc",
                "ec2:DescribeDhcpOptions",
                "ec2:DescribeVpcAttribute",
                "ec2:ModifySubnetAttribute",
                "iam:ListInstanceProfilesForRole",
                "iam:PassRole",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DescribeAvailabilityZones",
                "autoscaling:DescribeScalingActivities",
                "ec2:CreateSecurityGroup",
                "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
                "ec2:ModifyVpcAttribute",
                "ec2:ModifyInstanceAttribute",
                "autoscaling:AttachLoadBalancers",
                "ec2:AuthorizeSecurityGroupEgress",
                "ec2:AssociateDhcpOptions",
                "elasticloadbalancing:CreateLoadBalancer",
                "elasticloadbalancing:AttachLoadBalancerToSubnets",
                "autoscaling:EnableMetricsCollection",
                "iam:GetInstanceProfile",
                "ec2:DescribeTags",
                "elasticloadbalancing:DescribeTags",
                "route53:ListHostedZones",
                "iam:ListRoles",
                "ec2:DescribeNatGateways",
                "iam:ListInstanceProfiles",
                "route53:ListTagsForResource",
                "ec2:AllocateAddress",
                "ec2:DescribeSecurityGroups",
                "elasticloadbalancing:CreateLoadBalancerListeners",
                "ec2:DescribeImages",
                "autoscaling:CreateLaunchConfiguration",
                "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
                "ec2:DescribeVpcs",
                "iam:GetUser"
            ],
            "Resource": "*"
        }
    ]
}

// Destroy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObjectTagging",
                "s3:DeleteObjectVersion",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::your_bucket/*",
                "arn:aws:s3:::your_bucket"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "ec2:DeleteSubnet",
                "ec2:ReplaceRouteTableAssociation",
                "iam:RemoveRoleFromInstanceProfile",
                "ec2:DeleteRouteTable",
                "elasticloadbalancing:DeleteLoadBalancer",
                "ec2:DeleteVolume",
                "ec2:RevokeSecurityGroupEgress",
                "iam:DeleteRolePolicy",
                "ec2:DeleteInternetGateway",
                "route53:DeleteHostedZone",
                "ec2:ReleaseAddress",
                "iam:DeleteInstanceProfile",
                "ec2:TerminateInstances",
                "ec2:DeleteRoute",
                "iam:DeleteRole",
                "ec2:DetachInternetGateway",
                "ec2:DisassociateRouteTable",
                "ec2:RevokeSecurityGroupIngress",
                "autoscaling:DeleteLaunchConfiguration",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteDhcpOptions",
                "ec2:DeleteNatGateway",
                "autoscaling:DeleteAutoScalingGroup",
                "ec2:DeleteVpc",
                "ec2:DeleteKeyPair"
            ],
            "Resource": "*" // Should be your cluster resources
        }
    ]
}

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

As I mentioned, my list was developed by verifying that I had no errors. My initial problems were with a fully open policy. I did this lock-down when I was making no progress.
