Charts: [stable/jenkins] Jenkins pod remains in init state

Created on 4 Sep 2019 · 3 comments · Source: helm/charts

I am trying to set up Jenkins with the Helm chart, but the Jenkins pod always remains in the Init status.
I have already created a PV and a PVC and assigned the PVC in the values file. Below is my configuration:

```yaml
clusterZone: "cluster.local"

master:
  componentName: "jenkins-master"
  image: "jenkins/jenkins"
  tag: "lts"
  imagePullPolicy: "IfNotPresent"
  lifecycle:
  numExecutors: 0
  customJenkinsLabels: []
  useSecurity: true
  enableXmlConfig: true
  securityRealm: |-
    <securityRealm class="hudson.security.LegacySecurityRealm"/>
  authorizationStrategy: |-
    <authorizationStrategy class="hudson.security.FullControlOnceLoggedInAuthorizationStrategy">
      <denyAnonymousReadAccess>true</denyAnonymousReadAccess>
    </authorizationStrategy>
  hostNetworking: false
  adminUser: "admin"
  adminPassword: "admin"
  rollingUpdate: {}
  resources:
    requests:
      cpu: "50m"
      memory: "256Mi"
    limits:
      cpu: "2000m"
      memory: "2048Mi"
  usePodSecurityContext: true
  servicePort: 8080
  targetPort: 8080
  serviceType: NodePort
  serviceAnnotations: {}
  deploymentLabels: {}
  serviceLabels: {}
  podLabels: {}
  nodePort: 32323
  healthProbes: true
  healthProbesLivenessTimeout: 5
  healthProbesReadinessTimeout: 5
  healthProbeLivenessPeriodSeconds: 10
  healthProbeReadinessPeriodSeconds: 10
  healthProbeLivenessFailureThreshold: 5
  healthProbeReadinessFailureThreshold: 3
  healthProbeLivenessInitialDelay: 90
  healthProbeReadinessInitialDelay: 60
  slaveListenerPort: 50000
  slaveHostPort:
  disabledAgentProtocols:
    - JNLP-connect
    - JNLP2-connect
  csrf:
    defaultCrumbIssuer:
      enabled: true
      proxyCompatability: true
  cli: false
  slaveListenerServiceType: "ClusterIP"
  slaveListenerServiceAnnotations: {}
  slaveKubernetesNamespace:
  loadBalancerSourceRanges:
  enableRawHtmlMarkupFormatter: false
  scriptApproval:
  initScripts:
  jobs: {}
  JCasC:
    enabled: false
    pluginVersion: "1.27"
    supportPluginVersion: "1.18"
    configScripts:
      welcome-message: |
        jenkins:
          systemMessage: Welcome to our CI\CD server. This Jenkins is configured and managed 'as code'.
  customInitContainers: []
  sidecars:
    configAutoReload:
      enabled: false
      image: shadwell/k8s-sidecar:0.0.2
      imagePullPolicy: IfNotPresent
      resources: {}
      sshTcpPort: 1044
      folder: "/var/jenkins_home/casc_configs"
  nodeSelector: {}
  tolerations: []
  podAnnotations: {}
  customConfigMap: false
  overwriteConfig: false
  overwriteJobs: false
  ingress:
    enabled: false
    apiVersion: "extensions/v1beta1"
    labels: {}
    annotations: {}
    hostName:
    tls:
  backendconfig:
    enabled: false
    apiVersion: "extensions/v1beta1"
    name:
    labels: {}
    annotations: {}
    spec: {}
  route:
    enabled: false
    labels: {}
    annotations: {}
  additionalConfig: {}
  hostAliases: []
  prometheus:
    enabled: false
    serviceMonitorAdditionalLabels: {}
    scrapeInterval: 60s
    scrapeEndpoint: /prometheus
    alertingRulesAdditionalLabels: {}
    alertingrules: []

agent:
  enabled: true
  image: "jenkins/jnlp-slave"
  tag: "3.27-1"
  customJenkinsLabels: []
  imagePullSecretName:
  componentName: "jenkins-slave"
  privileged: false
  resources:
    requests:
      cpu: "200m"
      memory: "256Mi"
    limits:
      cpu: "200m"
      memory: "256Mi"
  alwaysPullImage: false
  podRetention: "Never"
  envVars:
  volumes:
  nodeSelector: {}
  command:
  args:
  sideContainerName: "jnlp"
  TTYEnabled: false
  containerCap: 10
  podName: "default"
  idleMinutes: 0
  yamlTemplate:

persistence:
  enabled: true
  existingClaim: jenkins-pvc
  storageClass:
  annotations: {}
  accessMode: "ReadWriteOnce"
  size: "2Gi"
  volumes:
  mounts:

networkPolicy:
  enabled: false
  apiVersion: networking.k8s.io/v1

rbac:
  create: true

serviceAccount:
  create: true
  name:
  annotations: {}

serviceAccountAgent:
  create: false
  name:
  annotations: {}

backup:
  enabled: false
  componentName: "backup"
  schedule: "0 2 * * *"
  annotations:
    iam.amazonaws.com/role: "jenkins"
  image:
    repository: "nuvo/kube-tasks"
    tag: "0.1.2"
  extraArgs: []
  existingSecret: {}
  env:
    - name: "AWS_REGION"
      value: "us-east-1"
  resources:
    requests:
      memory: 1Gi
      cpu: 1
    limits:
      memory: 1Gi
      cpu: 1
  destination: "s3://nuvo-jenkins-data/backup"

checkDeprecation: true
```
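
A quick way to narrow down why the pod stays in Init is to confirm the pre-created claim is actually bound and to read the init container's output. The sketch below is generic; it assumes the `jenkins-pvc` claim name from the values above and uses a placeholder for the pod name:

```shell
# Confirm the pre-created claim is Bound; a Pending PVC keeps the Jenkins pod from starting.
kubectl get pvc jenkins-pvc

# Inspect init container status and the Events section for the Jenkins pod
# (<jenkins-pod-name> is a placeholder for whatever `kubectl get pods` shows).
kubectl describe pod <jenkins-pod-name>

# Read the log of the chart's config-copying init container (copy-default-config in stable/jenkins).
kubectl logs <jenkins-pod-name> -c copy-default-config
```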


All 3 comments

@jyotiverma03 Did you test it locally? Take a look at this example:

kubectl create namespace jenkins
kubectl create secret tls jenkins.tls-secret \
  --cert='/etc/ssl/certs/example/ca.crt' \
  --key='/etc/ssl/certs/example/ca.key' \
  -n jenkins
helm install stable/jenkins \
  -n jenkins \
  --namespace jenkins \
  --set master.serviceType=ClusterIP \
  --set master.ingress.enabled=true \
  --set master.ingress.hostName=jenkins.example.com \
  --set 'master.ingress.tls[0].secretName=jenkins.tls-secret' \
  --set 'master.ingress.tls[0].hosts={jenkins.example.com}' \
  --set agent.image=brunowego/jnlp-slave-s2i \
  --set agent.tag=3.29-1 \
  --set 'agent.volumes[0].type=HostPath' \
  --set 'agent.volumes[0].hostPath=/var/run/docker.sock' \
  --set 'agent.volumes[0].mountPath=/var/run/docker.sock'
kubectl rollout status deploy/jenkins -n jenkins
kubectl get secret jenkins -o jsonpath='{.data.jenkins-admin-password}' -n jenkins | base64 --decode; echo

@brunowego Yes, I have tested it locally, and this error is from the local environment. I need to run it with NodePort and without TLS. I will check the config you provided, with the modifications that I need.
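
For a NodePort setup without TLS, a minimal variant of the example above might look like the sketch below. It assumes the values shown earlier are saved as `values.yaml` (the file name is an assumption) and relies on the `serviceType: NodePort` and `nodePort: 32323` settings already present in that file:

```shell
# Sketch: NodePort install without TLS/ingress, assuming the values above are in values.yaml.
helm install stable/jenkins \
  -n jenkins \
  --namespace jenkins \
  -f values.yaml

# The master should then be reachable on any node's IP at the configured nodePort (32323 above).
kubectl get nodes -o wide
kubectl get svc -n jenkins
```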

@brunowego I was able to run it with NodePort with the help of the configuration you provided.
