Ingress-nginx: GCE LBC should use "rate" mode

Created on 7 Jan 2017 · 23 comments · Source: kubernetes/ingress-nginx

With UTILIZATION mode we cannot use an ILB and an HTTP LB on the same IG.

If we have 3 IGs, one per zone but all in the same region, we can just set RPS=1 per instance on all IGs and traffic will balance proportionally to the size of each IG.
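Sketching that with gcloud (hypothetical backend service and IG names, and assuming each zonal IG is already attached to the backend service):

for ZONE in europe-west1-b europe-west1-c europe-west1-d; do
  gcloud compute backend-services update-backend k8s-be-30081--UID \
    --global \
    --instance-group=k8s-ig--UID \
    --instance-group-zone="$ZONE" \
    --balancing-mode=RATE \
    --max-rate-per-instance=1
done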

@bprashanth

All 23 comments

Currently it just picks the GCE default (UTILIZATION) by leaving the mode unspecified: https://github.com/kubernetes/ingress/blob/master/controllers/gce/backends/backends.go#L120

I thought about this for a while initially, and figured that:

  1. The mode doesn't matter within an IG anyway (which you already know, but for the benefit of other readers: http://stackoverflow.com/questions/40470982/whats-the-default-load-balancing-algorithm-in-glbc-l7-in-gke)
  2. As a user I'd like GCE to route requests away from loaded IGs in the multi-cluster case, i.e., I set up 2 GCE clusters, each with its own IG, and federate them with an Ingress. I assume GCE would gather the right IG-level utilization metrics in this setup.

But your example of sharing an IG between an ILB and an HTTP LB sounds like a stronger case to flip the GCE default on our end, and the original reasoning was sort of flimsy to begin with.

Hmm, this is trickier than I imagined.

There are 2 problems:

  1. If we use RATE, we need an RPS value, which we need from the user (we currently just default UTILIZATION to 90%).
  2. A single IG CAN have many backend services pointing at it, but all the backend services must use the same balancing mode.

For 1, we can add a BalancingMode section to the Ingress backend API. Basically something like:

type IngressBackend struct {
    // Specifies the name of the referenced service.
    ServiceName string

    // Specifies the port of the referenced service.
    ServicePort intstr.IntOrString

    // Specifies the balancing mode to apply across endpoints of this backend.
    BalancingMode IngressBackendBalancingMode
}

type IngressBackendBalancingMode struct {
    // Exactly one of the following must be set.
    Rate        *IngressBalancingModeRate
    Utilization *IngressBalancingModeUtilization
    Connections *IngressBalancingModeConnections
}

type IngressBalancingModeRate struct {
    MaxRPS int64
    ...
}

type IngressBalancingModeConnections struct {
    MaxConns int64
}
...

If the balancing mode isn't specified, we assume UTILIZATION. If the user specifies RATE, they're allowed to share the same IG between an internal and an external LB on GCE.

2 is harder. I think we need to impose a cluster-wide restriction on mixing balancing modes. The problem is that a node may only be placed behind a single load-balanced IG, and an IG may only be placed behind a single balancing mode. So if a cluster is using UTILIZATION, GLBC should ignore BalancingModeRate until the user deletes all backends with UTILIZATION, and vice versa.
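A rough way to audit which balancing modes are in use across backend services (a sketch; gcloud format projections may vary by version):

gcloud compute backend-services list \
  --format="table(name, backends[].group.basename(), backends[].balancingMode)"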

Alternatives:

  • Only create new Backend Services with RATE: we can't do this because the existing single load-balanced IG is already in UTILIZATION. You can only flip one backend at a time, and you can't mix RATE and UTILIZATION, so once an IG has a backend with a given balancing mode, you can only flip the mode by deleting all backends referencing that IG.

  • Flip all old Backend Services to RATE: maybe we can provide a script that does this (see the sketch after this list), but I'm wary of automating it because we need to recreate old Backend Services.

  • Create one IG per BalancingMode per zone: an instance can't be part of 2 load-balanced IGs.
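For the second alternative, a migration script could look roughly like this. Only a sketch: it assumes detaching and re-attaching the backends (rather than recreating the backend services) is enough to flip the mode, it drops traffic while the IG is detached, and the IG name and zone are illustrative:

IG=k8s-ig--UID
ZONE=europe-west1-b
BS_LIST=$(gcloud compute backend-services list --format="value(name)")

# Detach the shared IG from every backend service that references it...
for BS in $BS_LIST; do
  gcloud compute backend-services remove-backend "$BS" --global \
    --instance-group="$IG" --instance-group-zone="$ZONE"
done

# ...then re-attach it in RATE mode.
for BS in $BS_LIST; do
  gcloud compute backend-services add-backend "$BS" --global \
    --instance-group="$IG" --instance-group-zone="$ZONE" \
    --balancing-mode=RATE --max-rate-per-instance=1
done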

Why offer a choice? For now UTILIZATION is utterly pointless. Whether you set the RPS to 1 or 1000000, it will still get all traffic, since there's only one IG. Choice is wrong here. Simpler is better.


Hmm, with a name like maxRPS I'd assumed it would reject requests above the limit to safeguard existing connections, but some rudimentary testing shows that not to be the case.

BalancingMode seems like a useful extension point for all LBs, but I suppose it isn't immediately important.

One caveat though: if the user has enabled (GCE) autoscaling, the autoscaler will kick in whenever the observed RPS exceeds the desired RPS and add an instance to the IG, so we should probably set the limit really high.
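For example (names illustrative), bumping the per-group max rate on an existing backend so the autoscaler never sees observed above desired:

gcloud compute backend-services update-backend k8s-be-30081--UID --global \
  --instance-group=k8s-ig--UID --instance-group-zone=europe-west1-b \
  --max-rate=1000000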

Hi, I spent a lot of time yesterday trying to understand the following message while trying to create an ingress of type gce:

[googleapi: Error 400: Validation failed for instance 'projects/my-project/zones/europe-west1-b/instances/gke-my-project-eu-default-pool-213xxxxx-g2vi': instance may belong to at most one load-balanced instance group., instanceInMultipleLoadBalancedIgs]

Googling that message only led me to the #google-containers Slack channel, so I think posting it here will help others find this issue (I also have an ILB in front of that instance group).

If you have a GCE LB controller v0.9.1 or higher, it should default to RATE mode if possible (to be compatible with ILB).

We will be working on an upgrade script for people who are already in UTILIZATION mode.


The thing is, I am on GKE, and the GCE ingress controller is the one provided by the master, which I cannot control.

I have no idea which version is deployed (master is on 1.5.3).

You will catch up in 1.6; I'm not sure if any 1.5.x will get an update to 0.9.1.


Hey @pdecat, k8s 1.5.3 ships with v0.9.1. As Tim mentions, that version sets RATE mode as default. https://github.com/kubernetes/ingress/blob/a5f8fe240cab265d50e82a57fbb584c376e4195c/controllers/gce/backends/backends.go#L162

I'm perplexed as to why you're seeing that event surface. It should only appear if your instance group has a non-RATE LB backend, but the presence of an ILB means that's impossible. Could you please provide a minimal list of steps to reproduce?
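One way to check which IGs an instance already belongs to (a sketch; the instance and zone names are placeholders, and the output projection may vary by gcloud version):

INSTANCE=gke-my-project-eu-default-pool-213xxxxx-g2vi
ZONE=europe-west1-b
for IG in $(gcloud compute instance-groups list --zones="$ZONE" --format="value(name)"); do
  if gcloud compute instance-groups list-instances "$IG" --zone="$ZONE" \
       --format="value(instance)" | grep -q "$INSTANCE"; then
    echo "$INSTANCE is in instance group $IG"
  fi
done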

Maybe it's because this cluster is several months old and was upgraded from 1.4.x.

When the master is upgraded, the GLBC is also upgraded. If you created the cluster at 1.4.x and had an LB with type=UTILIZATION, then it should have been impossible to create the ILB in the first place.

If you created your first ingress resource after upgrading master to 1.5.3, then there may be a problem with the controller. Any more info you could provide would be greatly appreciated. I'll be attempting to replicate as well.

The ILB was created in November.

It's the first GLB I am attempting to create on that cluster with the GCE ingress controller.

After further investigation, it may have nothing to do with the RATE/UTILIZATION load balancing mode.

The backend service created by the GLBC is indeed using the RATE load balancing mode:

# gcloud compute backend-services describe --global k8s-be-30081--d9b10b849848d7bc
affinityCookieTtlSec: 0
backends:
- balancingMode: RATE
  capacityScaler: 1.0
  group: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-b/instanceGroups/k8s-ig--d9b10b849848d7bc
  maxRate: 1
connectionDraining:
  drainingTimeoutSec: 0
creationTimestamp: '2017-03-20T07:34:45.486-07:00'
description: ''
enableCDN: false
fingerprint: d-LXfym7vns=
healthChecks:
- https://www.googleapis.com/compute/v1/projects/my-project/global/httpHealthChecks/k8s-be-30081--d9b10b849848d7bc
id: '2430887105935857258'
kind: compute#backendService
loadBalancingScheme: EXTERNAL
name: k8s-be-30081--d9b10b849848d7bc
port: 30081
portName: port30081
protocol: HTTP
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/global/backendServices/k8s-be-30081--d9b10b849848d7bc
sessionAffinity: NONE
timeoutSec: 30

However, it looks like the GLBC is creating a new instance group:

# kubectl describe ingress portal-webfront
Name:                   portal-webfront
Namespace:              default
Address:
Default backend:        portal-webfront:80 (10.144.2.82:80)
Rules:
  Host  Path    Backends
  ----  ----    --------
  *     *       portal-webfront:80 (10.144.2.82:80)
Annotations:
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  34m           34m             1       {loadbalancer-controller }                      Normal          ADD     default/portal-webfront
  34m           6m              22      {loadbalancer-controller }                      Warning         GCE     [googleapi: Error 400: Validation failed for instance 'projects/my-project/zones/europe-west1-b/instances/gke-my-project-eu-default-pool-213xxxxx-g2vi': instance may belong to at most one load-balanced instance group., instanceInMultipleLoadBalancedIgs]

Then, it fails to add existing instances to that group:

# gcloud --project=my-project compute instance-groups describe k8s-ig--d9b10b849848d7bc
creationTimestamp: '2017-03-20T07:34:36.927-07:00'
description: ''
fingerprint: dzGYWBSWzfM=
id: '7276294416999011987'
isManaged: 'No'
kind: compute#instanceGroup
name: k8s-ig--d9b10b849848d7bc
namedPorts:
- name: port30081
  port: 30081
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-b/instanceGroups/k8s-ig--d9b10b849848d7bc
size: 0
zone: https://www.googleapis.com/compute/v1/projects/my-project/zones/europe-west1-b

Issuing the following command manually triggers the same error message that the GLBC gets:

# gcloud compute instance-groups unmanaged add-instances k8s-ig--d9b10b849848d7bc --instances=gke-my-project-eu-default-pool-213xxxxx-g2vi
ERROR: (gcloud.compute.instance-groups.unmanaged.add-instances) Some requests did not succeed:
 - Validation failed for instance 'projects/my-project/zones/europe-west1-b/instances/gke-my-project-eu-default-pool-213acba7-g2vi': instance may belong to at most one load-balanced instance group.

Also, deleting the failing GCE ingress leaves this new instance group dangling; it has to be deleted by hand.

Should I open a new issue? And/or a Google support ticket?

PS: sorry about the noise, I'm kind of semi-blindly debugging here.

FWIW, deploying the https://github.com/kubernetes/ingress/blob/master/controllers/gce/ingress-app.yaml sample leads to the same issue on a freshly spawned cluster with an ILB.

Step-by-step reproduction procedure

Create new cluster

# gcloud container clusters create test-glbc
Creating cluster test-glbc...done.
Created [https://container.googleapis.com/v1/projects/my-project/zones/europe-west1-b/clusters/test-glbc].
kubeconfig entry generated for test-glbc.
NAME       ZONE            MASTER_VERSION  MASTER_IP       MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
test-glbc  europe-west1-b  1.5.4           35.187.106.224  n1-standard-1  1.5.4         3          RUNNING

Get cluster credentials for kubectl:

# gcloud container clusters get-credentials test-glbc                                                                                                    
Fetching cluster endpoint and auth data.
kubeconfig entry generated for test-glbc.

Test GLBC ingress without ILB

# kubectl create -f ingress-app.yaml
service "echoheadersx" created
service "echoheadersy" created
replicationcontroller "echoheaders" created
ingress "echomap" created

Standalone creation OK:

# kubectl describe -f ingress-app.yaml
Name:                   echoheadersx
Namespace:              default
Labels:                 app=echoheaders
Selector:               app=echoheaders
Type:                   NodePort
IP:                     10.3.249.97
Port:                   http    80/TCP
NodePort:               http    30301/TCP
Endpoints:              10.0.0.4:8080
Session Affinity:       None
No events.


Name:                   echoheadersy
Namespace:              default
Labels:                 app=echoheaders
Selector:               app=echoheaders
Type:                   NodePort
IP:                     10.3.241.37
Port:                   http    80/TCP
NodePort:               http    30284/TCP
Endpoints:              10.0.0.4:8080
Session Affinity:       None
No events.


Name:           echoheaders
Namespace:      default
Image(s):       gcr.io/google_containers/echoserver:1.4
Selector:       app=echoheaders
Labels:         app=echoheaders
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  2m            2m              1       {replication-controller }                       Normal          SuccessfulCreate        Created pod: echoheaders-71rn7


Name:                   echomap
Namespace:              default
Address:                35.186.251.144
Default backend:        echoheadersx:80 (10.0.0.4:8080)
Rules:
  Host          Path    Backends
  ----          ----    --------
  foo.bar.com
                /foo    echoheadersx:80 (10.0.0.4:8080)
  bar.baz.com
                /bar    echoheadersy:80 (10.0.0.4:8080)
                /foo    echoheadersx:80 (10.0.0.4:8080)
Annotations:
  backends:             {"k8s-be-30284--e235ee39f3afd43d":"Unknown","k8s-be-30301--e235ee39f3afd43d":"Unknown"}
  forwarding-rule:      k8s-fw-default-echomap--e235ee39f3afd43d
  target-proxy:         k8s-tp-default-echomap--e235ee39f3afd43d
  url-map:              k8s-um-default-echomap--e235ee39f3afd43d
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  2m            2m              1       {loadbalancer-controller }                      Normal          ADD     default/echomap
  1m            1m              1       {loadbalancer-controller }                      Normal          CREATE  ip: 35.186.251.144
  1m            1m              2       {loadbalancer-controller }                      Normal          Service default backend set to echoheadersx:30301

Cleanup:

# kubectl delete -f ingress-app.yaml
service "echoheadersx" deleted
service "echoheadersy" deleted
replicationcontroller "echoheaders" deleted
ingress "echomap" deleted

Create internal load balancer

# gcloud compute health-checks create tcp test-glbc-http --port 80 --port-name http
Created [https://www.googleapis.com/compute/v1/projects/my-project/global/healthChecks/test-glbc-http].
NAME            PROTOCOL
test-glbc-http  TCP
# gcloud compute backend-services create test-glbc-be-http --protocol TCP --load-balancing-scheme=INTERNAL --region europe-west1 --health-checks=test-glbc-http
Created [https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/backendServices/test-glbc-be-http].
NAME               BACKENDS  PROTOCOL
test-glbc-be-http            TCP
# gcloud compute backend-services add-backend test-glbc-be-http --instance-group=gke-test-glbc-default-pool-db869d65-grp --instance-group-zone europe-west1-b --region europe-west1
Updated [https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/backendServices/test-glbc-be-http].
# gcloud compute forwarding-rules create test-glbc-ifr --load-balancing-scheme=INTERNAL --backend-service=test-glbc-be-http --ports=80 --region=europe-west1
Created [https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/forwardingRules/test-glbc-ifr].
---
IPAddress: 10.132.0.5
IPProtocol: TCP
backendService: https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/backendServices/test-glbc-be-http
creationTimestamp: '2017-03-20T11:06:43.978-07:00'
description: ''
id: '5639198685020398812'
kind: compute#forwardingRule
loadBalancingScheme: INTERNAL
name: test-glbc-ifr
network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default
ports:
- '80'
region: europe-west1
selfLink: https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/forwardingRules/test-glbc-ifr
subnetwork: https://www.googleapis.com/compute/v1/projects/my-project/regions/europe-west1/subnetworks/default
# gcloud compute firewall-rules create test-glbc-fw-healthcheck --source-ranges=130.211.0.0/22 --target-tags=gke-test-glbc-090834e3-node --allow=TCP
Created [https://www.googleapis.com/compute/v1/projects/my-project/global/firewalls/test-glbc-fw-healthcheck].
NAME                      NETWORK  SRC_RANGES      RULES  SRC_TAGS  TARGET_TAGS
test-glbc-fw-healthcheck  default  130.211.0.0/22  tcp              gke-test-glbc-090834e3-node

Test GLBC ingress with ILB

# kubectl create -f ingress-app.yaml                                 
service "echoheadersx" created                                                                                                 
service "echoheadersy" created                                                                                                 
replicationcontroller "echoheaders" created
ingress "echomap" created

Creation with existing ILB failed:

# kubectl describe -f ingress-app.yaml
Name:                   echoheadersx
Namespace:              default
Labels:                 app=echoheaders
Selector:               app=echoheaders
Type:                   NodePort
IP:                     10.3.246.123
Port:                   http    80/TCP
NodePort:               http    30301/TCP
Endpoints:              10.0.0.5:8080
Session Affinity:       None
No events.


Name:                   echoheadersy
Namespace:              default
Labels:                 app=echoheaders
Selector:               app=echoheaders
Type:                   NodePort
IP:                     10.3.246.72
Port:                   http    80/TCP
NodePort:               http    30284/TCP
Endpoints:              10.0.0.5:8080
Session Affinity:       None
No events.


Name:           echoheaders
Namespace:      default
Image(s):       gcr.io/google_containers/echoserver:1.4
Selector:       app=echoheaders
Labels:         app=echoheaders
Replicas:       1 current / 1 desired
Pods Status:    1 Running / 0 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason                  Message
  ---------     --------        -----   ----                            -------------   --------        ------                  -------
  37s           37s             1       {replication-controller }                       Normal          SuccessfulCreate        Created pod: echoheaders-qkn1n


Name:                   echomap
Namespace:              default
Address:
Default backend:        echoheadersx:80 (10.0.0.5:8080)
Rules:
  Host          Path    Backends
  ----          ----    --------
  foo.bar.com
                /foo    echoheadersx:80 (10.0.0.5:8080)
  bar.baz.com
                /bar    echoheadersy:80 (10.0.0.5:8080)
                /foo    echoheadersx:80 (10.0.0.5:8080)
Annotations:
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason  Message
  ---------     --------        -----   ----                            -------------   --------        ------  -------
  38s           38s             1       {loadbalancer-controller }                      Normal          ADD     default/echomap
  16s           1s              10      {loadbalancer-controller }                      Warning         GCE     [googleapi: Error 400: Validation failed for instance 'projects/my-project/zones/europe-west1-b/instances/gke-test-glbc-default-pool-db869d65-9chb': instance may belong to at most one load-balanced instance group., instanceInMultipleLoadBalancedIgs]

Ahh yes, the ILB and the GCLB have to share an IG, and the LB controller has a baked-in assumption of the name. Easiest is to let the HTTP LB go first.

@nicksardo - can you explain how to derive the IG name to use?


Yes, recommend letting the ingress controller create the instance group.

k8s-ig--CLUSTERNAME

Example:
k8s-ig--fcc72fe46f7de048

For the uninitiated, where does clustername come from?

On Mon, Mar 20, 2017 at 12:13 PM, Nick Sardo notifications@github.com
wrote:

Yes, recommend letting the ingress controller create the instance group.

k8s-ig--CLUSTERNAME

Example:
k8s-ig--fcc72fe46f7de048

—
You are receiving this because you authored the thread.
Reply to this email directly, view it on GitHub
https://github.com/kubernetes/ingress/issues/112#issuecomment-287867144,
or mute the thread
https://github.com/notifications/unsubscribe-auth/AFVgVOJmgI6HiHKP_Pgp2NZCwwjTAd6bks5rns_FgaJpZM4LdSkM
.

How would one do that on GKE? I don't see the option to name the IG at cluster creation time.


Or do you mean to create a GKE cluster without any instance group? Is that possible?

GKE/Kube will automatically create an IG for the nodes in each zone, which is used to manage the nodes but not to load-balance them. We create a second IG just for LB, which is currently managed by GLBC. As ILB becomes more built-in, we'll have to reconsider where that IG is managed and whether it should remain distinct from the primary IGs.

One reason the IGs are distinct: GKE nodepools allow you to have many "shapes" of machines, each with an IG (per zone). Flattening that into an IG of all nodes in a given zone makes LB setup a lot easier. But that IG logic should probably move to our cloud provider controller, rather than GLBC, and we might want to reconsider the name to make it more obvious.

@bowei @nicksardo @bprashanth

For the uninitiated, where does clustername come from?

The cluster name is stored in the "ingress-uid" configmap. If it doesn't exist, the controller generates one from /dev/urandom and updates the configmap.

kubectl --namespace=kube-system get configmaps ingress-uid -o yaml
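Putting those together, deriving the LB instance group name could look like this (a sketch; it assumes the configmap stores the value under the data key "uid"):

CLUSTER_UID=$(kubectl --namespace=kube-system get configmaps ingress-uid \
  -o jsonpath='{.data.uid}')
echo "k8s-ig--${CLUSTER_UID}"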

Indeed, the future of the LB instance groups is worth discussion.

For the record, deleting the ILB, creating the GCE ingress with the GLBC, then recreating the ILB using the IG managed by GLBC works.
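Condensed, that order of operations looks roughly like this (names taken from the reproduction above; detaching the backend instead of deleting the whole ILB may be sufficient, but that's an assumption):

# 1. Detach the GKE node IG from the ILB backend service.
gcloud compute backend-services remove-backend test-glbc-be-http \
  --region=europe-west1 \
  --instance-group=gke-test-glbc-default-pool-db869d65-grp \
  --instance-group-zone=europe-west1-b

# 2. Create the Ingress so GLBC creates its own IG (k8s-ig--UID).
kubectl create -f ingress-app.yaml

# 3. Attach the GLBC-managed IG to the ILB backend service.
gcloud compute backend-services add-backend test-glbc-be-http \
  --region=europe-west1 \
  --instance-group=k8s-ig--UID \
  --instance-group-zone=europe-west1-b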

Thanks @thockin, @nicksardo.

Did anybody come up with a migration tool for old backends?

