Production and staging have both been configured to use TLS, but the certificates being served are self-signed.

Steps to reproduce: configure the cluster with `jx boot` and enable TLS for production and staging:
`jx-requirements.yml`:

```yaml
environments:
- ingress:
    cloud_dns_secret_name: external-dns-gcp-sa
    domain: domain.tld
    externalDNS: true
    namespaceSubDomain: -jx.
    tls:
      email: [email protected]
      enabled: true
      production: true
  key: dev
- ingress:
    domain: domain.tld
    externalDNS: true
    namespaceSubDomain: ""
    tls:
      email: [email protected]
      enabled: true
      production: true
  key: staging
```
Expected behavior: certificates are issued by Let's Encrypt.

Actual behavior: self-signed Kubernetes certificates are being issued.
The output of `jx version` is:

```
$ jx version
NAME               VERSION
jx                 2.0.976
Kubernetes cluster v1.13.11-gke.9
kubectl            v1.16.2
helm client        Client: v2.13.1+g618447c
git                2.17.1
Operating System   Ubuntu 18.04.3 LTS
```
The cluster was created with:

```
jx create cluster gke --skip-installation -n clustername --region=us-west1 --max-num-nodes=9 --min-num-nodes=1
```
I can see the following messages in the cert-manager logs:

```
2019-11-13T18:39:08.394746Z cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="jx-production/appname"
2019-11-13T18:39:08.395174Z cert-manager/controller/ingress-shim "level"=0 "msg"="failed to determine issuer to be used for ingress resource" "resource_kind"="Ingress" "resource_name"="appname" "resource_namespace"="jx-production"
2019-11-13T18:39:08.395413Z cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="jx-production/appname"
2019-11-13T18:40:01.261726Z cert-manager/controller/ingress-shim "level"=0 "msg"="syncing item" "key"="jx-production/appname"
2019-11-13T18:40:01.262213Z cert-manager/controller/ingress-shim "level"=0 "msg"="failed to determine issuer to be used for ingress resource" "resource_kind"="Ingress" "resource_name"="appname" "resource_namespace"="jx-production"
2019-11-13T18:40:01.262489Z cert-manager/controller/ingress-shim "level"=0 "msg"="finished processing work item" "key"="jx-production/appname"
```
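The "failed to determine issuer" message suggests ingress-shim cannot find an issuer annotation on the Ingress in `jx-production`. If I understand cert-manager of this vintage correctly, ingress-shim only acts on Ingresses carrying annotations roughly like the following (`letsencrypt-prod` is a placeholder issuer name, not something from this cluster):

```yaml
metadata:
  annotations:
    # tells ingress-shim to manage a certificate for this Ingress
    kubernetes.io/tls-acme: "true"
    # or name the issuer explicitly (pre-v0.11 annotation prefix):
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
```

So the symptom would be consistent with exposecontroller creating the production Ingresses without these annotations.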
The content of `environment-clustername-production/env/values.yaml` is:

```yaml
PipelineSecrets: {}
cleanup:
  Annotations:
    helm.sh/hook: pre-delete
    helm.sh/hook-delete-policy: hook-succeeded
  Args:
  - --cleanup
expose:
  Annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: hook-succeeded
  Args:
  - --v
  - 4
  config:
    domain: domain.tld
    exposer: Ingress
    http: "false"
    tlsSecretName: tls-domain.tld-p
    tlsacme: "true"
    urltemplate: '{{.Service}}-{{.Namespace}}.{{.Domain}}'
  production: true
jenkins:
  Servers:
    Global: {}
prow: {}
```
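For anyone debugging the same thing: one way to check whether cert-manager actually has an issuer to work with is to list its resources directly (this assumes the cert-manager 0.x CRDs that Jenkins X installs; resource names will vary per cluster):

```shell
# Is there a cluster-wide or namespaced issuer at all?
kubectl get clusterissuer
kubectl get issuer -n jx-production
# Has cert-manager created any Certificate resources for the environment?
kubectl get certificate -n jx-production
```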
The workarounds listed here seem relevant: https://github.com/jenkins-x/jx/issues/5310#issuecomment-528263468
I ended up working around the issue by running `jx upgrade ingress --namespace=jx-production`.
One relevant quirk I noticed concerns the URL template. _Background:_ although the cluster was originally configured with `urltemplate: '{{.Service}}-{{.Namespace}}.{{.Domain}}'`, changing that field has no effect in my cluster.

_Quirk:_ when `jx upgrade ingress` prompts `? URLTemplate (press <Enter> to keep the current value):`, pressing <Enter> applies the _default_ template, not the current one, along with proper certs from Let's Encrypt.
I noticed that `jx upgrade ingress` only works for existing ingresses anyway. As soon as a new one is added (e.g. a new application is promoted for the first time), it does not get a valid certificate. IMO it would be better if it just did a dns-01 challenge for a wildcard certificate, the way it is done in the jx namespace on initial installation, and reused that certificate for all ingresses on that domain.
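To illustrate the wildcard suggestion: a rough sketch of such a Certificate using the pre-v0.11 cert-manager API (`certmanager.k8s.io/v1alpha1`) that Jenkins X shipped at the time. The issuer name `letsencrypt-prod`, the secret name, and the `clouddns` provider are placeholders and would have to match a DNS-01-capable issuer already configured in the cluster:

```yaml
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: wildcard-domain-tld
  namespace: jx-production
spec:
  # secret that every Ingress on the domain could then reference
  secretName: tls-wildcard-domain-tld
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
  - '*.domain.tld'
  acme:
    config:
    - dns01:
        # must name a dns01 provider defined on the issuer
        provider: clouddns
      domains:
      - '*.domain.tld'
```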
Yeah I've seen the same behavior.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close
@jenkins-x-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.