We can only access our argocd instance from our office network; it's not publicly accessible.
The setup is that we use a public-facing AWS LB for Kubernetes ingress. It only accepts connections from our office network and does not accept connections from private addresses, so you can't hit that LB from within the kube cluster.
We set the url in argocd-cm to https://argocd.external.foo.com. The problem is that when argocd-server initializes its dex client via gooidc.NewProvider, the go-oidc library tries to connect to that external address, which it can't reach from inside the cluster (the LB doesn't allow connections from the private address range), so it just times out.
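For reference, here is a minimal sketch of the relevant argocd-cm fields under the setup above; the dex connector block and secret names are illustrative, not our actual config:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  # Externally advertised URL; go-oidc performs OIDC discovery against this
  # address, so it must be reachable from inside the cluster.
  url: https://argocd.external.foo.com
  # Any dex.config triggers the dex proxy behavior; this GitHub connector is
  # purely illustrative.
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: example-client-id
          clientSecret: $dex.github.clientSecret
```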
Is your problem that your argocd instance tries to talk to an external (outside of your cluster) SSO provider and your k8s cluster config does not allow this call to happen?
A way to confirm this is the case would be to connect to a running pod and do something like
curl <sso provider> --> to see if you can access the provider
curl google.com --> to see if you can access the internet

Also, is your SSO provider a public SSO or part of an internal network? If it is public, please provide the name so I can try this with my cluster.
> Is your problem that your argocd instance tries to talk to an external (outside of your cluster) SSO provider and your k8s cluster config does not allow this call to happen?
The issue is that, for anyone using dex.config, Argo CD simultaneously acts as an OIDC provider (by reverse proxying dex) and as an OIDC client to itself, connecting directly to whatever is specified in data.url. This means it goes out and comes back in through the LoadBalancer to talk to itself as an OIDC provider. In some environments the LoadBalancer's configuration does not permit traffic from its own cluster's VPC egress IPs, which is what we faced ourselves. Our workaround was to add our cluster's VPC egress IPs to the LoadBalancer's whitelist.
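To illustrate that workaround (a sketch only; the Service name, labels, and CIDRs are made up, and the same idea applies if the allow-list lives in an Ingress annotation or security group instead), the cluster's egress IPs are listed next to the office range so traffic that loops out and back in through the LB is accepted:

```yaml
# Hypothetical LoadBalancer Service fronting argocd-server.
apiVersion: v1
kind: Service
metadata:
  name: argocd-server-lb              # illustrative name
  namespace: argocd
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: argocd-server
  ports:
    - name: https
      port: 443
      targetPort: 8080
  loadBalancerSourceRanges:
    - 203.0.113.0/24                  # office network (example CIDR)
    - 198.51.100.10/32                # cluster VPC egress IP (example)
```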
Ideally Argo CD would talk to dex over the internal cluster service hostname but advertise the public hostname. I'm sure there are ways to do this, possibly by using separate OIDC clients, but I haven't looked too deeply.
I have a fix for this that uses an HTTP rewrite rule to make the communication go over the in-cluster dex address instead of the external address. It will be available in v0.12.