Jx: jenkins X never came up on my kubernetes cluster on AWS

Created on 1 Feb 2019 · 24 comments · Source: jenkins-x/jx

Summary

I started using Jenkins X instead of our standalone Jenkins (which we deploy via a Kubernetes Helm chart). I set up the cluster and context, and also configured the Bitbucket server that we use for our Git repositories.

jx install --provider aws --domain jenkins....com --git-provider-url='https://bitbucket.org//jenkinsx' --git-username='' --git-api-token='' --namespace=jenkins

Namespace jenkins created
Using helmBinary helm with feature flag: none
Context "" modified.
Storing the kubernetes provider aws in the TeamSettings
Git configured for user: Samyak Rout and email [email protected]
Using helm2
Configuring tiller
Tiller Deployment is running in namespace kube-system
existing ingress controller found, no need to install a new one
Waiting for external loadbalancer to be created and update the nginx-ingress-controller service in kube-system namespace
External loadbalancer created
Waiting to find the external host name of the ingress controller Service in namespace kube-system with name jxing-nginx-ingress-controller
About to insert/update DNS CNAME record into HostedZone /hostedzone/Z3LZPTYR5VY4QP with wildcard *.jenkins....com pointing to a6ab6b35825a711e9b1ee028efbc6ab8-47ba891e82d6a967.elb.us-east-2.amazonaws.com
Updated HostZone ID /hostedzone/Z3LZPTYR5VY4QP successfully
nginx ingress controller installed and configured
Lets set up a Git user name and API token to be able to perform CI/CD

? Do you wish to use samyakr as the local Git user for https://bitbucket.org/samyakr/jenkinsx server: Yes
Select the CI/CD pipelines Git server and user
? Do you wish to use https://bitbucket.org/samyakr/jenkinsx as the pipelines Git server: Yes
? Do you wish to use samyakr as the pipelines Git user for https://bitbucket.org/samyakr/jenkinsx server: Yes
Setting the pipelines Git server https://bitbucket.org/samyakr/jenkinsx and user name samyakr.
Saving the Git authentication configuration
Current configuration dir: /Users/samyakrout/.jx
options.Flags.CloudEnvRepository: https://github.com/jenkins-x/cloud-environments
options.Flags.LocalCloudEnvironment: false
Cloning the Jenkins X cloud environments repo to /Users/samyakrout/.jx/cloud-environments
? A local Jenkins X cloud environments repository already exists, recreate with latest? Yes
Current configuration dir: /Users/samyakrout/.jx
options.Flags.CloudEnvRepository: https://github.com/jenkins-x/cloud-environments
options.Flags.LocalCloudEnvironment: false
Cloning the Jenkins X cloud environments repo to /Users/samyakrout/.jx/cloud-environments
Enumerating objects: 8, done.
Counting objects: 100% (8/8), done.
Compressing objects: 100% (6/6), done.
Total 1373 (delta 2), reused 6 (delta 2), pack-reused 1365
? Select Jenkins installation type: Serverless Jenkins
No default password set, generating a random one
Generated helm values /Users/samyakrout/.jx/extraValues.yaml
Creating Secret jx-install-config in namespace jenkins
Installing Jenkins X platform helm chart from: /Users/samyakrout/.jx/cloud-environments/env-aws

Installing knative into namespace jenkins
Updating Helm repository...
Helm repository update done.
Upgrading Chart 'upgrade --namespace jenkins --install --force --timeout 6000 --set build.auth.git.username=-admin --set build.auth.git.password= --set --set tillerNamespace= knative-build jenkins-x/knative-build'

retrying after error:failed to run 'helm upgrade --namespace jenkins --install --force --timeout 6000 --set build.auth.git.username=-admin --set build.auth.git.password= * --set tillerNamespace= knative-build jenkins-x/knative-build' command in directory '', output: 'Release "knative-build" does not exist. Installing it now.
Error: apiVersion "caching.internal.knative.dev/v1alpha1" in knative-build/templates/image-git-init.yaml is not available'

Updating Helm repository...
Helm repository update done.
Upgrading Chart 'upgrade --namespace jenkins --install --force --timeout 6000 --set build.auth.git.username=-admin --set build.auth.git.password= --set --set tillerNamespace= knative-build jenkins-x/knative-build'
Waiting for tiller pod to be ready, service account name is tiller, namespace is jenkins, tiller namespace is kube-system
Waiting for cluster role binding to be defined, named tiller-role-binding in namespace jenkins
tiller cluster role defined: cluster-admin in namespace jenkins
tiller pod running
? Pick workload build pack: Kubernetes Workloads: Automated CI+CD with GitOps Promotion
Setting the team build pack to kubernetes-workloads repo: https://github.com/jenkins-x-buildpacks/jenkins-x-kubernetes.git ref: master
Installing jx into namespace jenkins
Installing jenkins-x-platform version: 0.0.3321
Adding values file /Users/samyakrout/.jx/cloud-environments/env-aws/myvalues.yaml
Adding values file /Users/samyakrout/.jx/adminSecrets.yaml
Adding values file /Users/samyakrout/.jx/extraValues.yaml
Adding values file /Users/samyakrout/.jx/cloud-environments/env-aws/secrets.yaml
Upgrading Chart 'upgrade --namespace jenkins --install --timeout 6000 --version 0.0.3321 --values /Users/samyakrout/.jx/cloud-environments/env-aws/myvalues.yaml --values /Users/samyakrout/.jx/adminSecrets.yaml --values /Users/samyakrout/.jx/extraValues.yaml --values /Users/samyakrout/.jx/cloud-environments/env-aws/secrets.yaml jenkins-x jenkins-x/jenkins-x-platform'
error: installing the Jenkins X platform: failed to install/upgrade the jenkins-x platform chart: failed to run 'helm upgrade --namespace jenkins --install --timeout 6000 --version 0.0.3321 --values /Users/samyakrout/.jx/cloud-environments/env-aws/myvalues.yaml --values /Users/samyakrout/.jx/adminSecrets.yaml --values /Users/samyakrout/.jx/extraValues.yaml --values /Users/samyakrout/.jx/cloud-environments/env-aws/secrets.yaml jenkins-x jenkins-x/jenkins-x-platform' command in directory '/Users/samyakrout/.jx/cloud-environments/env-aws', output: 'Error: UPGRADE FAILED: "jenkins-x" has no deployed releases'
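The final `UPGRADE FAILED: "jenkins-x" has no deployed releases` error is Helm 2 behavior: once a release's only revision is in FAILED state, `helm upgrade --install` refuses to act on it until the failed release record is purged. A hedged recovery sketch (not part of the original report; it assumes the release name jenkins-x and namespace jenkins from the log above):

```shell
# List every release, including FAILED ones (Helm 2 syntax)
helm list --all

# Purge the failed release record so the name can be reused
helm delete --purge jenkins-x

# Then re-run the installer with the same flags as before
jx install --provider aws --namespace=jenkins
```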

Steps to reproduce the behavior

I went through the pre checks:

brew tap jenkins-x/jx
brew install jx
https://jenkins-x.io/getting-started/install-on-cluster/

Expected behavior

I was expecting Jenkins X to be spun up under the new jenkins namespace, but that didn't work and the install failed.

When I then tried to clean up the resources, since there were failed Helm charts related to jenkins-x, the deletion could not complete and threw the errors below:
(⎈ |devops3-vlocity:jenkins)➜ env-aws helm delete --purge jenkins-x
Error: jobs.batch "cleanup" already exists
(⎈ |devops3-vlocity:jenkins)➜ env-aws helm delete --purge jenkins-x
Error: timed out waiting for the condition

And it gets stuck there.
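The `jobs.batch "cleanup" already exists` error suggests the platform chart's pre-delete hook job was left behind by the first delete attempt, so the retry collides with it. A sketch of one way around this (my suggestion, not from the thread; it assumes the hook job lives in the release namespace jenkins, which is worth verifying with kubectl get jobs first):

```shell
# Remove the leftover pre-delete hook job from the earlier attempt
kubectl delete job cleanup -n jenkins

# Retry the purge; --no-hooks (a Helm 2 flag) skips the delete hooks
# entirely if they keep timing out
helm delete --purge --no-hooks jenkins-x
```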

Actual behavior

All the steps should have worked on my cluster so that I could access Jenkins using my custom DNS name; instead the install failed as shown above.

Jx version

The output of jx version is:

jx version
NAME               VERSION
jx                 1.3.821
jenkins x platform 0.0.3321
Kubernetes cluster v1.10.11
kubectl            v1.8.4
helm client        v2.9.1+g20adb27
helm server        canary+unreleased+g7161095
git                git version 2.17.0
Operating System   Mac OS X 10.13.4 build 17E199

Jenkins type

  • [ ] Classic Jenkins
  • [x] Serverless Jenkins
    I selected Serverless Jenkins

Kubernetes cluster


I used Kubespray and Ansible to create the k8s cluster on AWS. My k8s version, from kubectl version:
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.11", GitCommit:"637c7e288581ee40ab4ca210618a89a555b6e7e9", GitTreeState:"clean", BuildDate:"2018-11-26T14:25:46Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

Operating system / Environment


CoreOS is the OS on the server/worker nodes, but I am running the jx command from my macOS client.

Labels: area/aws, area/install, area/prow, kind/bug, lifecycle/rotten, priority/important-longterm


All 24 comments

@Sam123ben Please could you provide a bit more state from your cluster? Which deployment is failing? Are there any pods failing or stuck in creating state? Thanks

Helm:

jenkins-x 1 Fri Feb 1 12:30:37 2019 FAILED jenkins-x-platform-0.0.3321 jenkins
jxing 1 Fri Feb 1 09:27:39 2019 DEPLOYED nginx-ingress-1.1.0 kube-system

Resources:
The full output of my jenkins namespace that include some of my own resources as well:

kubectl get pods,svc,deployments,configmaps,secrets,ingress -n jenkins
NAME READY STATUS RESTARTS AGE
po/build-controller-6d8c58db8b-vfqmx 1/1 Running 0 2m
po/buildnum-57cc87df67-zxq4b 1/1 Running 0 2m
po/crier-77c55b9864-5f2rz 1/1 Running 0 2m
po/deck-f569d7469-2vdcx 0/1 Running 5 2m
po/deck-f569d7469-lcz2v 0/1 Running 5 2m
po/hook-796f9c597c-xcfjr 0/1 CrashLoopBackOff 4 2m
po/hook-796f9c597c-xzcfd 0/1 CrashLoopBackOff 4 2m
po/horologium-6b89fdb77f-48hx5 1/1 Running 0 2m
po/jenkins-5596456d45-bmhxl 1/1 Running 0 6h
po/jenkins-nginx-ingress-controller-58fcd5c87f-srvkn 1/1 Running 0 6h
po/jenkins-nginx-ingress-default-backend-7849b87687-5b7w9 1/1 Running 0 6h
po/pipeline-666b847fb-b4dh4 1/1 Running 0 2m
po/plank-774ccd595f-jt2xv 1/1 Running 0 2m
po/prow-build-5b5db9c768-p96g8 1/1 Running 0 2m
po/sinker-79596b66f8-g4b85 1/1 Running 0 2m
po/tide-676f949f4c-8c46q 0/1 CrashLoopBackOff 4 2m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/build-controller ClusterIP 10.233.39.113 9090/TCP 2m
svc/buildnum ClusterIP 10.233.27.38 80/TCP 2m
svc/deck ClusterIP 10.233.49.186 80/TCP 2m
svc/hook ClusterIP 10.233.9.13 80/TCP 2m
svc/jenkins ClusterIP 10.233.31.109 80/TCP 6h
svc/jenkins-jnlp ClusterIP 10.233.62.31 50000/TCP 6h
svc/jenkins-nginx-ingress-controller LoadBalancer 10.233.55.75 a24bdcf3325d5... 80:32658/TCP,443:30101/TCP 6h
svc/jenkins-nginx-ingress-default-backend ClusterIP 10.233.51.138 80/TCP 6h
svc/tide ClusterIP 10.233.36.145 80/TCP 2m

NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/build-controller 1 1 1 1 2m
deploy/buildnum 1 1 1 1 2m
deploy/crier 1 1 1 1 2m
deploy/deck 2 2 2 0 2m
deploy/hook 2 2 2 0 2m
deploy/horologium 1 1 1 1 2m
deploy/jenkins 1 1 1 1 6h
deploy/jenkins-nginx-ingress-controller 1 1 1 1 6h
deploy/jenkins-nginx-ingress-default-backend 1 1 1 1 6h
deploy/pipeline 1 1 1 1 2m
deploy/plank 1 1 1 1 2m
deploy/prow-build 1 1 1 1 2m
deploy/sinker 1 1 1 1 2m
deploy/tide 1 1 1 0 2m

NAME DATA AGE
cm/config 1 8h
cm/config-logging 5 2m
cm/ingress-config 5 8h
cm/ingress-controller-leader-nginx 0 6h
cm/jenkins 6 6h
cm/jenkins-groovy-hooks 1 6h
cm/jenkins-nginx-ingress-controller 1 6h
cm/jenkins-slave-dot-ssh 1 6h
cm/jx-install-config 3 8h
cm/plugins 2 8h

NAME TYPE DATA AGE
secrets/build-controller-token-wwbwg kubernetes.io/service-account-token 3 2m
secrets/buildnum-token-l6hxj kubernetes.io/service-account-token 3 2m
secrets/crier-token-6cxpp kubernetes.io/service-account-token 3 2m
secrets/deck-token-ltzzw kubernetes.io/service-account-token 3 2m
secrets/default-token-lk5pt kubernetes.io/service-account-token 3 8h
secrets/helm-token-klzbv kubernetes.io/service-account-token 3 2m
secrets/hmac-token Opaque 1 2m
secrets/hook-token-pxvln kubernetes.io/service-account-token 3 2m
secrets/horologium-token-khnzf kubernetes.io/service-account-token 3 2m
secrets/jenkins-bitbucket-ssh fabric8.io/jenkins-bitbucket-ssh 2 6h
secrets/jenkins-codecommit-ssh Opaque 2 6h
secrets/jenkins-docker-cfg fabric8.io/jenkins-docker-cfg 1 6h
secrets/jenkins-hub-api-token fabric8.io/jenkins-hub-api-token 1 6h
secrets/jenkins-master-ssh fabric8.io/jenkins-master-ssh 1 6h
secrets/jenkins-maven-settings fabric8.io/secret-maven-settings 1 6h
secrets/jenkins-nginx-ingress-token-gnzrp kubernetes.io/service-account-token 3 6h
secrets/jenkins-release-gpg fabric8.io/jenkins-release-gpg 4 6h
secrets/jenkins-ssh Opaque 4 6h
secrets/jenkins-ssh-config fabric8.io/jenkins-ssh-config 1 6h
secrets/jenkins-token-vwnjf kubernetes.io/service-account-token 3 6h
secrets/jx-install-config Opaque 2 8h
secrets/knative-basic-user-pass kubernetes.io/basic-auth 2 2m
secrets/knative-build-bot-token-k7gkv kubernetes.io/service-account-token 3 2m
secrets/oauth-token Opaque 1 2m
secrets/pipeline-token-5rqwq kubernetes.io/service-account-token 3 2m
secrets/plank-token-24d7b kubernetes.io/service-account-token 3 2m
secrets/prow-build-token-j2tqj kubernetes.io/service-account-token 3 2m
secrets/sinker-token-gb8z7 kubernetes.io/service-account-token 3 2m
secrets/tide-token-krfvq kubernetes.io/service-account-token 3 2m

po/tide-676f949f4c-8c46q 0/1 CrashLoopBackOff 4 2m
po/deck-f569d7469-2vdcx 0/1 Running 5 2m
po/deck-f569d7469-lcz2v 0/1 Running 5 2m
po/hook-796f9c597c-xcfjr 0/1 CrashLoopBackOff 4 2m
po/hook-796f9c597c-xzcfd 0/1 CrashLoopBackOff 4 2m

But in fact the major issue is that I am not able to delete the Helm chart.

It would be great if you could provide the output of kubectl describe and kubectl logs on the pods which are in CrashLoopBackOff. Thanks

kubectl describe po/tide-676f949f4c-8c46q -n jenkins
Name: tide-676f949f4c-8c46q
Namespace: jenkins
Node: ip-172-18-2-184.us-east-2.compute.internal/172.18.2.184
Start Time: Fri, 01 Feb 2019 21:34:36 +1100
Labels: app=tide
pod-template-hash=2329505907
Annotations:
Status: Running
IP: 10.233.81.93
Controlled By: ReplicaSet/tide-676f949f4c
Containers:
tide:
Container ID: docker://217c36254fa083e179a4324bd6edb94e2c137ddc0e59a49cc1ebbc8f810186bc
Image: jenkinsxio/tide:pipeline1
Image ID: docker-pullable://jenkinsxio/tide@sha256:253389a707188d64cdbd06fdb190cca11b60261e26c010355c2b54862c4f84c6
Port: 8888/TCP
Args:
--dry-run=false
--github-endpoint=https://api.github.com
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 01 Feb 2019 22:42:05 +1100
Finished: Fri, 01 Feb 2019 22:42:06 +1100
Ready: False
Restart Count: 19
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/config from config (ro)
/etc/github from oauth (ro)
/var/run/secrets/kubernetes.io/serviceaccount from tide-token-krfvq (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
oauth:
Type: Secret (a volume populated by a Secret)
SecretName: oauth-token
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: config
Optional: false
tide-token-krfvq:
Type: Secret (a volume populated by a Secret)
SecretName: tide-token-krfvq
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: dedicated=jenkins
node.kubernetes.io/memory-pressure:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 2m (x326 over 1h) kubelet, ip-172-18-2-184.us-east-2.compute.internal Back-off restarting failed container

kubectl describe po/hook-796f9c597c-xzcfd -n jenkins
Name: hook-796f9c597c-xzcfd
Namespace: jenkins
Node: ip-172-18-2-172.us-east-2.compute.internal/
Start Time: Fri, 01 Feb 2019 21:34:36 +1100
Labels: app=hook
pod-template-hash=3529571537
Annotations:
Status: Running
IP: 10.233.69.49
Controlled By: ReplicaSet/hook-796f9c597c
Containers:
hook:
Container ID: docker://97e5d77fb49931aa48ea9b14162959cdd8ecfcc29d586048d73ff2f06fa0a787
Image: jenkinsxio/hook:pipeline1
Image ID: docker-pullable://jenkinsxio/hook@sha256:ad2924788e7e71bb4939b94bd056a5ef83b5b6b443cb86e03bb4d8be4356e5df
Port: 8888/TCP
Args:
--dry-run=false
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 01 Feb 2019 22:47:13 +1100
Finished: Fri, 01 Feb 2019 22:47:13 +1100
Ready: False
Restart Count: 19
Limits:
cpu: 400m
memory: 256Mi
Requests:
cpu: 200m
memory: 128Mi
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/config from config (ro)
/etc/github from oauth (ro)
/etc/plugins from plugins (ro)
/etc/webhook from hmac (ro)
/var/run/secrets/kubernetes.io/serviceaccount from hook-token-pxvln (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
hmac:
Type: Secret (a volume populated by a Secret)
SecretName: hmac-token
Optional: false
oauth:
Type: Secret (a volume populated by a Secret)
SecretName: oauth-token
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: config
Optional: false
plugins:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: plugins
Optional: false
hook-token-pxvln:
Type: Secret (a volume populated by a Secret)
SecretName: hook-token-pxvln
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: dedicated=jenkins
node.kubernetes.io/memory-pressure:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m (x329 over 1h) kubelet, ip-172-18-2-172.us-east-2.compute.internal Back-off restarting failed container

kubectl describe po/deck-f569d7469-2vdcx -n jenkins
Name: deck-f569d7469-2vdcx
Namespace: jenkins
Node: ip-172-18-5-62.us-east-2.compute.internal/172.18.5.62
Start Time: Fri, 01 Feb 2019 21:34:36 +1100
Labels: app=deck
pod-template-hash=912583025
Annotations:
Status: Running
IP: 10.233.67.106
Controlled By: ReplicaSet/deck-f569d7469
Containers:
deck:
Container ID: docker://024ec1fd9629b39a08a0e52a2df44e50822d12bfc711f6637ef15391cba1693d
Image: jenkinsxio/deck:pipeline1
Image ID: docker-pullable://jenkinsxio/deck@sha256:8782484af385a2a605dbb305d124805995de16f80064143dfe8cedd1080110f9
Port: 8080/TCP
Args:
--hook-url=http://hook/plugin-help
--tide-url=http://tide
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 01 Feb 2019 22:44:46 +1100
Finished: Fri, 01 Feb 2019 22:45:05 +1100
Ready: False
Restart Count: 29
Limits:
cpu: 200m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment:
Mounts:
/etc/config from config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from deck-token-ltzzw (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: config
Optional: false
deck-token-ltzzw:
Type: Secret (a volume populated by a Secret)
SecretName: deck-token-ltzzw
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: dedicated=jenkins
node.kubernetes.io/memory-pressure:NoSchedule
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 8m (x270 over 1h) kubelet, ip-172-18-5-62.us-east-2.compute.internal Back-off restarting failed container
Warning Unhealthy 3m (x56 over 1h) kubelet, ip-172-18-5-62.us-east-2.compute.internal Readiness probe failed: Get http://10.233.67.106:8080/: dial tcp 10.233.67.106:8080: getsockopt: connection refused

(⎈ |devops3-vlocity:jenkins)➜ vof_env_devops2_vlocity git:(master) βœ— kubectl logs -f deck-f569d7469-lcz2v -n jenkins
time="2019-02-01T11:45:12Z" level=info msg="Spyglass registered viewer build-log-viewer with title Build Log."
time="2019-02-01T11:45:12Z" level=info msg="Spyglass registered viewer junit-viewer with title JUnit."
time="2019-02-01T11:45:12Z" level=info msg="Spyglass registered viewer metadata-viewer with title Metadata."
(⎈ |devops3-vlocity:jenkins)➜ vof_env_devops2_vlocity git:(master) βœ—

(⎈ |devops3-vlocity:jenkins)➜ vof_env_devops2_vlocity git:(master) βœ— kubectl logs -f hook-796f9c597c-xzcfd -n jenkins
{"component":"hook","error":"error getting bot name: fetching bot name from GitHub: status code 401 not one of [200], body: {\"message\":\"Bad credentials\",\"documentation_url\":\"https://developer.github.com/v3\"}","level":"fatal","msg":"Error getting Git client.","time":"2019-02-01T11:47:13Z"}

kubectl logs -f tide-676f949f4c-8c46q -n jenkins
{"client":"github","component":"tide","level":"info","msg":"Throttle(800, 39)","time":"2019-02-01T11:47:08Z"}
{"client":"github","component":"tide","level":"info","msg":"Throttle(400, 200)","time":"2019-02-01T11:47:08Z"}
{"component":"tide","error":"error getting bot name: fetching bot name from GitHub: status code 401 not one of [200], body: {\"message\":\"Bad credentials\",\"documentation_url\":\"https://developer.github.com/v3\"}","level":"fatal","msg":"Error getting Git client.","time":"2019-02-01T11:47:08Z"}
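The 401 "Bad credentials" errors show hook and tide authenticating against api.github.com (tide's pod description above even shows the --github-endpoint=https://api.github.com argument) with a token GitHub rejects, consistent with a Bitbucket token having been written into prow's oauth-token secret. One way to confirm what the pods actually mount (my addition; the key name oauth inside the secret is the usual prow convention, so check it with the describe command first):

```shell
# Show which keys the secret carries
kubectl describe secret oauth-token -n jenkins

# Decode the token the prow components mount at /etc/github
kubectl get secret oauth-token -n jenkins \
  -o jsonpath='{.data.oauth}' | base64 -d
```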

@ccojocar Thanks for responding. I have shared all the details that might help your team troubleshoot the issue.

@ccojocar Hi, any updates on this? I need to build a PoC so we can adopt this for our projects, and I am currently blocked on it.

@ccojocar Not sure if anyone could help me with this

@Sam123ben Could you try installing with the latest version? It seems that there were some issues with the knative chart in the initial installation:

Error: apiVersion "caching.internal.knative.dev/v1alpha1" in knative-build/templates/image-git-init.yaml is not available'

I'm having the exact same issue on a brand-new installation against Minikube. @ccojocar, re: your suggestion to "install the latest version": what does that mean? I got jx from Homebrew just yesterday.

Logs:

➜  ~ kubectl logs tide-7ddccdcc99-72nw5
{"client":"github","component":"tide","level":"info","msg":"Throttle(800, 39)","time":"2019-03-13T19:10:08Z"}
{"client":"github","component":"tide","level":"info","msg":"Throttle(400, 200)","time":"2019-03-13T19:10:08Z"}
{"component":"tide","error":"error getting bot name: fetching bot name from GitHub: status code 401 not one of [200], body: {\"message\":\"Bad credentials\",\"documentation_url\":\"https://developer.github.com/v3\"}","level":"fatal","msg":"Error getting Git client.","time":"2019-03-13T19:10:09Z"}
➜  ~ kubectl logs deck-cbb8dfd87-fcl6k
time="2019-03-13T19:10:32Z" level=info msg="Spyglass registered viewer buildlog with title Build Log." 
time="2019-03-13T19:10:32Z" level=info msg="Spyglass registered viewer junit with title JUnit." 
time="2019-03-13T19:10:32Z" level=info msg="Spyglass registered viewer metadata with title Metadata." 
➜  ~ kubectl logs hook-6bf85c9ccf-vtvgg
{"component":"hook","error":"error getting bot name: fetching bot name from GitHub: status code 401 not one of [200], body: {\"message\":\"Bad credentials\",\"documentation_url\":\"https://developer.github.com/v3\"}","level":"fatal","msg":"Error getting Git client.","time":"2019-03-13T19:14:23Z"}
➜  ~ 

So this looks more and more like a bug with prow/tide. The exact error message seems to be coming from here: https://github.com/kubernetes/test-infra/blob/56ad7645150efd9b46b296f81aec1918ce039c98/prow/github/client.go#L538

I think this bug is a dup of #2285 ?

Just chiming in - same issue here when attempting to install on EKS with bitbucket via jx install --no-default-environments --git-provider-url=https://bitbucket.org --git-provider-kind=bitbucketcloud --git-private=true. Note: this failure condition only occurs when choosing the serverless Tekton setup. I can still install the static version with no issues.

Facing the same issue setting up JenkinsX on AWS with BitBucket server. hook and tide containers are crashing as they try to connect to github.com APIs irrespective of what GIT server was provided (In my case it is BitBucket and Not GitHub). When I used GitHub, none of the containers crashed, but the setup froze on this step: "waiting for install to be ready, if this is the first time then it will take a while to download images\n"
So looks like there are two issues with JX AWS EKS setup -

  1. GIT server selection - only GitHub works
  2. Possibly with serverless tekton setup as mentioned by @AirbornePorcine

Folks, when will the other Git servers be usable?

Facing the same issue. The tide pod always tries to connect to GitHub even when specifying Bitbucket.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Provide feedback via https://jenkins-x.io/community.
/lifecycle rotten

Hi All,
I am facing a similar kind of issue. I installed Jenkins X in an already existing on-premises k8s cluster and found that the tide, hook, nexus and pipelinerunner pods are crashing.

crier-5945df9987-b9qpz 1/1 Running 0 20m
deck-c554d4f8f-mn9ct 1/1 Running 0 20m
deck-c554d4f8f-zgnnq 1/1 Running 0 20m
hook-fcb784985-9bfhw 0/1 CrashLoopBackOff 9 20m
hook-fcb784985-wdspz 0/1 CrashLoopBackOff 11 20m
horologium-9cb787cd8-pwqw9 1/1 Running 0 20m
jenkins-x-chartmuseum-67f75c4884-gsv7t 1/1 Running 0 20m
jenkins-x-controllerbuild-69d77dbfb7-jdnzx 1/1 Running 0 20m
jenkins-x-controllerrole-777546f5c-d7vlm 1/1 Running 0 20m
jenkins-x-docker-registry-589d4d79c4-87bbj 1/1 Running 0 20m
jenkins-x-heapster-66754b8fc5-psrw8 2/2 Running 0 20m
jenkins-x-nexus-5d7f85c5c5-84rk5 0/1 CrashLoopBackOff 7 20m
jenkins-x-nexus-67ffc76bc7-dpd6p 0/1 CrashLoopBackOff 7 17m
pipeline-5ddf8d5979-vgh96 1/1 Running 0 20m
pipelinerunner-7c7b666998-zlj2b 0/1 CrashLoopBackOff 9 20m
plank-85f78549d8-tpkvt 1/1 Running 0 20m
sinker-65c4b9b567-s7npn 1/1 Running 0 20m
tekton-pipelines-controller-84c476bb4d-9zm28 1/1 Running 0 21m
tekton-pipelines-webhook-dc78bb98f-qfk2k 1/1 Running 0 21m
tide-667ff77444-fg9l8 0/1 CrashLoopBackOff 11 20m

Hook pod error :
Warning Unhealthy 20m (x8 over 22m) kubelet, cdf-worker1.hpeswlab.net Liveness probe failed: Get http://172.16.72.173:8888/: dial tcp 172.16.72.173:8888: connect: connection refused
Warning Unhealthy 13m (x16 over 22m) kubelet, cdf-worker1.hpeswlab.net Readiness probe failed: Get http://172.16.72.173:8888/: dial tcp 172.16.72.173:8888: connect: connection refused
Normal Pulled 8m16s (x8 over 21m) kubelet, cdf-worker1.hpeswlab.net Container image "gcr.io/jenkinsxio/prow/hook:v20200107-aaa0608" already present on machine
Warning BackOff 3m21s (x70 over 19m) kubelet, cdf-worker1.hpeswlab.net Back-off restarting failed container

Nexus pod error:
Warning FailedPostStartHook 14m (x4 over 15m) kubelet, cdf-worker1.hpeswlab.net Exec lifecycle hook ([/opt/sonatype/nexus/postStart.sh]) for Container "nexus" in Pod "jenkins-x-nexus-5d7f85c5c5-84rk5_jx(e9e07bf3-8325-11ea-9159-005056be56c3)" failed - error: command '/opt/sonatype/nexus/postStart.sh' exited with 137: , message: ".."
Normal Killing 14m (x4 over 15m) kubelet, cdf-worker1.hpeswlab.net Killing container with id docker://nexus:FailedPostStartHook
Warning BackOff 41s (x77 over 15m) kubelet, cdf-worker1.hpeswlab.net Back-off restarting failed container

Pipelinerunner pod error:
Warning Unhealthy 23m (x2 over 23m) kubelet, cdf-worker1.hpeswlab.net Readiness probe failed: Get http://172.16.72.170:8080/ready: dial tcp 172.16.72.170:8080: connect: connection refused
Normal Pulled 23m (x3 over 23m) kubelet, cdf-worker1.hpeswlab.net Container image "gcr.io/jenkinsxio/builder-maven:2.0.1286-625" already present on machine
Normal Created 23m (x4 over 24m) kubelet, cdf-worker1.hpeswlab.net Created container
Normal Started 23m (x4 over 24m) kubelet, cdf-worker1.hpeswlab.net Started container
Warning Unhealthy 22m (x3 over 23m) kubelet, cdf-worker1.hpeswlab.net Liveness probe failed: Get http://172.16.72.170:8080/health: dial tcp 172.16.72.170:8080: connect: connection refused
Warning BackOff 4m46s (x97 over 23m) kubelet, cdf-worker1.hpeswlab.net Back-off restarting failed container

tide pod error:
Warning Unhealthy 24m (x6 over 24m) kubelet, cdf-worker2.hpeswlab.net Liveness probe failed: Get http://172.16.39.146:8888/: dial tcp 172.16.39.146:8888: connect: connection refused
Normal Created 24m (x3 over 25m) kubelet, cdf-worker2.hpeswlab.net Created container
Normal Killing 24m (x2 over 24m) kubelet, cdf-worker2.hpeswlab.net Killing container with id docker://tide:Container failed liveness probe.. Container will be killed and recreated.
Normal Pulled 24m (x2 over 24m) kubelet, cdf-worker2.hpeswlab.net Container image "gcr.io/jenkinsxio/prow/tide:v20200107-aaa0608" already present on machine
Normal Started 24m (x3 over 25m) kubelet, cdf-worker2.hpeswlab.net Started container
Warning Unhealthy 5m36s (x32 over 24m) kubelet, cdf-worker2.hpeswlab.net Readiness probe failed: Get http://172.16.39.146:8888/: dial tcp 172.16.39.146:8888: connect: connection refused
Warning BackOff 31s (x92 over 22m) kubelet, cdf-worker2.hpeswlab.net Back-off restarting failed container
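The events above only show probe failures and back-off; the actual crash reason is in the log of the terminated container instance. A sketch of how to pull it (my addition; the tide pod name is taken from the listing above, and the namespace jx is inferred from the nexus FailedPostStartHook event, which names the pod as ..._jx):

```shell
# Logs from the previously terminated container of the crashing pod
kubectl logs tide-667ff77444-fg9l8 -n jx --previous

# Full event history plus the last-state exit code for the pod
kubectl describe pod tide-667ff77444-fg9l8 -n jx
```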

The output of jx version is:

jx version

NAME               VERSION
jx                 2.0.1282
jenkins x platform 2.0.2152
Kubernetes cluster v1.13.10
kubectl            v1.13.10
helm client        2.14.0
git                2.16.6
Operating System   Unknown Linux distribution Linux version 3.10.0-1062.9.1.el7.x86_64 ([email protected]) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 SMP Fri Dec 6 15:49:49 UTC 2019

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close

@jenkins-x-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Provide feedback via https://jenkins-x.io/community.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the jenkins-x/lighthouse repository.
