Azure-docs: External IP always pending after days of trying

Created on 24 Aug 2018 · 22 comments · Source: MicrosoftDocs/azure-docs

Hello,
I have carefully been through the tutorial several times and can never get an external IP. I have tried with and without Ingress, assigning a static IP, and different namespaces. This has been going on for a week now. Any help would be greatly appreciated.

Thank you kindly!


Document Details

⚠ Do not edit this section. It is required for docs.microsoft.com ➟ GitHub issue linking.

Pri1 container-service/svc cxp docs-experience in-progress triaged

Most helpful comment

@koman I ran through the doc and it took about 2 minutes to get the external IP. I see the same behavior on the other docs when waiting for an external IP; it usually takes 2-4 minutes.

I notice that the console does not always update though. So I usually Control+C out and rerun the --watch command to check the status.

Have you checked your instance after, say, about 10 minutes? If it is still pending, then something specific to your environment is likely causing the issue, and if that is the case we can get you in contact with support to get it resolved.

All 22 comments

I thought I'd add what I did today from a fresh start -

kubectl apply -f azure-vote-all-in-one-redis.yaml

deployment.apps/azure-vote-back created
service/azure-vote-back created
deployment.apps/azure-vote-front created
service/azure-vote-front created

kubectl get service azure-vote-front --watch

NAME               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.118.56   <pending>     80:31126/TCP   1m

kubectl describe service azure-vote-front

Name:                     azure-vote-front
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"azure-vote-front","namespace":"default"},"spec":{"ports":[{"port":80}],"select...
Selector:                 app=azure-vote-front
Type:                     LoadBalancer
IP:                       10.0.118.56
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31126/TCP
Endpoints:                10.244.0.42:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>


I ran this again

kubectl get service azure-vote-front

NAME               TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
azure-vote-front   LoadBalancer   10.0.118.56   <pending>     80:31126/TCP   1m

kubectl get pods

NAME                                READY     STATUS    RESTARTS   AGE
azure-vote-back-655476c7f7-nfhq6    1/1       Running   0          2m
azure-vote-front-74dd9d69c6-8swj4   1/1       Running   0          2m

Thanks for the feedback! We are currently investigating and will update you shortly.

@koman I ran through the doc and it took about 2 minutes to get the external IP. I see the same behavior on the other docs when waiting for an external IP; it usually takes 2-4 minutes.

I notice that the console does not always update though. So I usually Control+C out and rerun the --watch command to check the status.

Have you checked your instance after, say, about 10 minutes? If it is still pending, then something specific to your environment is likely causing the issue, and if that is the case we can get you in contact with support to get it resolved.
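
For reference, re-checking looks like this with the tutorial's service name (Ctrl+C first if a --watch session has gone stale):

kubectl get service azure-vote-front --watch
# or check once, without watching
kubectl get service azure-vote-front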

@MicahMcKittrick-MSFT I'm also not seeing a delay of more than 3-4 minutes in obtaining external IP addresses when creating load balancers or ingress controllers. Support may need to examine the subscription to determine why the network resources can't be successfully deployed.

@koman If you're using advanced networking and the virtual network and subnet are in a different resource group than your AKS cluster, ensure that the AKS service principal is granted contributor permissions for network create operations on the virtual network's resource group.
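
A hedged sketch of granting that permission, with placeholder resource group, cluster, and virtual network names:

# Look up the cluster's service principal
SP_ID=$(az aks show --resource-group myAKSResourceGroup --name myAKSCluster --query servicePrincipalProfile.clientId -o tsv)
# Grant it Contributor on the resource group that holds the virtual network
az role assignment create --assignee $SP_ID --role Contributor --scope $(az group show --name myVnetResourceGroup --query id -o tsv)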

@MicahMcKittrick-MSFT Thank you for your reply. I have tried everything you suggested, but it is always pending. Could you kindly assist me in getting technical support to help with this? I have totally run out of ideas here. Thank you.
@iainfoulds Thank you, everything is in the same resource group currently. I'm still new to this, so I'm trying to keep it as simple as possible for now ;-)

@koman no worries we will get it sorted out :)

Can you email me at [email protected] and provide me with your SubscriptionID and link to this GitHub issue?

@koman thanks for the email.

I will close this issue as we are taking the problem offline.

@MicahMcKittrick-MSFT could you please provide a resolution to this issue? It looks like a common one.
I'm facing the same issue with basic networking.

Hi

I had to delete the cluster and create a new service principal with a new cluster, and then it worked.


Yes, this is a known issue that the product team is actively working on.
The workaround that seems to work, as @koman pointed out, is deleting the cluster and recreating it. Not ideal, but for now it seems to fix the problem.
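
For reference, a minimal sketch of that workaround with placeholder names (not an official procedure):

az aks delete --resource-group myResourceGroup --name myAKSCluster --yes
az ad sp create-for-rbac --skip-assignment   # note the new appId and password
az aks create --resource-group myResourceGroup --name myAKSCluster --service-principal <appId> --client-secret <password> --generate-ssh-keys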

I fixed this issue by creating the service principal for AKS with just a password, not an SSL certificate.

Once I did that, I could create ingress controllers and Ambassador gateways and they would always get an IP address.

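
For what it's worth, a password credential is the default for az ad sp create-for-rbac; a certificate is only issued when you pass --create-cert or --cert. A sketch with placeholder names:

az ad sp create-for-rbac --skip-assignment --name myAKSServicePrincipal
# Pass the returned appId and password to the cluster
az aks create --resource-group myResourceGroup --name myAKSCluster --service-principal <appId> --client-secret <password> --generate-ssh-keys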

Hi,
I am experiencing the same problem following the tutorial.
I recreated the cluster (and the service principal) two times, but the EXTERNAL-IP is still pending.
The only thing is that I am using my own (simpler) yaml:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pspacemvc
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: pspacemvc
    spec:
      containers:
      - name: pspacemvc
        image: pspaceacr.azurecr.io/pspacemvc:v1
        ports:
        - containerPort: 9002
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: DEV
          value: "fileapi"
---
apiVersion: v1
kind: Service
metadata:
  name: pspacemvc
spec:
  type: LoadBalancer
  ports:
  - port: 9002
  selector:
    app: pspacemvc

Thank you for providing any help and directions!

My mistake - I used the wrong password while creating AKS.
Once I set the correct one, it got the EXTERNAL-IP after about 2-3 minutes.

I ran into the same issue. The EXTERNAL-IP was in the pending state after hours. I tried with WestUS, WestUS2, and EastUS. I fixed it by using a service principal instead of SSH keys.

$rg = "resource group name"
$aks = "K8 cluster name"

Create service principal
az ad sp create-for-rbac --skip-assignment

Note the appId and password from output
az aks create --resource-group $rg --name $aks --node-count 1 --enable-addons monitoring --service-principal 'appId from previous output' --client-secret 'password from previous output'
az aks get-credentials --resource-group $rg --name $aks
kubectl apply -f azure-vote.yaml
kubectl get service azure-vote-front --watch

The public IP address appeared after 2m58s for the EastUS location.

Just for clarity, @prathul, those are two very different things. I don't think you were running into problems because of SSH keys.

Every AKS cluster has a service principal; it's just a question of whether you pre-create one or let AKS create one at cluster creation time. The service principal is used for communication with other Azure resources.

Same for SSH keys. The --generate-ssh-keys parameter generates the required SSH keys if they don't already exist; leaving it out means the cluster uses the default keys found in ~/.ssh. SSH keys aren't used for cluster communication with other Azure resources, only when you connect to the nodes yourself.
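
Put differently, both are visible on any existing cluster and serve separate purposes. A quick check, with placeholder resource group and cluster names:

# Identity the cluster uses to create Azure resources such as the load balancer and public IP
az aks show --resource-group myResourceGroup --name myAKSCluster --query servicePrincipalProfile.clientId -o tsv
# Key that is only used for SSH access to the nodes
az aks show --resource-group myResourceGroup --name myAKSCluster --query "linuxProfile.ssh.publicKeys[0].keyData" -o tsv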

Is there any good way to debug this? I've been waiting 15 minutes and still no external IP...
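
A few generic things to check while the address stays pending (resource names below are placeholders; the node resource group normally follows the MC_<resource-group>_<cluster>_<region> pattern):

kubectl describe service azure-vote-front   # the Events section often shows the provisioning error
kubectl get events --sort-by=.metadata.creationTimestamp
az monitor activity-log list --resource-group MC_myResourceGroup_myAKSCluster_eastus --offset 1h -o table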

After a few years this is still a problem....

Create the static IP with --sku Standard. Without --sku Standard, the IP is created with the Basic SKU.
A Basic static IP cannot be used with the cluster's Standard load balancer.

Take a look at the activity log; you will see a warning like this:

Standard sku load balancer /subscriptions/55aa..../resourceGroups/MC_kubernetes-dev-kubernetes-dev-cluster_northeurope/providers/Microsoft.Network/loadBalancers/kubernetes cannot reference Basic sku publicIP /subscriptions/55aa..../resourceGroups/MC_kubernetes-dev_kubernetes-dev-cluster_northeurope/providers/Microsoft.Network/publicIPAddresses/kubernetes-dev-public-ip.

Add --sku Standard and deploy your Service with this static IP:
az network public-ip create --resource-group <MC_your-RG> --name Your-public-ip-name --sku Standard --allocation-method static --query publicIp.ipAddress -o tsv
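
Referencing that IP from the Service might then look roughly like this - a sketch, assuming the tutorial's front-end service and an IP created in the MC_ node resource group:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  # If the public IP lives outside the MC_ node resource group, also add the annotation
  # service.beta.kubernetes.io/azure-load-balancer-resource-group: <that resource group>
spec:
  type: LoadBalancer
  loadBalancerIP: <the static IP printed by the command above>
  ports:
  - port: 80
  selector:
    app: azure-vote-front
EOF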

In the end, what needs to be kept in mind (at least in my experience):

The static external IP needs to be in the same region as the AKS cluster and the same SKU as the cluster's load balancer. Then it seems to work.
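
To confirm the region and SKU of an existing public IP before pointing a Service at it (names are placeholders):

az network public-ip show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name myAKSPublicIP --query "{sku:sku.name, location:location, ip:ipAddress}" -o table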

@gitsupersmecher, April 2020 - still a problem :)
I just opened a ticket with Azure Support; I will share the feedback with the community.
On my end, the service principal complains it cannot read a subnet that does not even exist ('default'). I don't know how to authorize the service principal to read the subnet that I want (not the one that it wants).
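
A hedged sketch of granting the cluster's service principal access to a specific subnet (all names and the appId are placeholders):

SUBNET_ID=$(az network vnet subnet show --resource-group myVnetResourceGroup --vnet-name myVnet --name mySubnet --query id -o tsv)
az role assignment create --assignee <aks-sp-appId> --role "Network Contributor" --scope $SUBNET_ID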


Any response from support?

