this is a problem:



I am also facing a similar issue. Were you able to resolve the problem?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
This issue is being automatically closed due to inactivity.
To fix this, set the env vars below in the jnlp container:

```yaml
env:
  - name: JENKINS_URL
    value: "http://jenkins.operations:8080"
  - name: JENKINS_TUNNEL
    value: "jenkins-agent.operations:50000"
```
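As a generic sanity check (not part of the chart or plugin), you can verify from inside an agent pod that both endpoints actually accept TCP connections. The hostnames below are the ones used in the env vars from this thread; adjust them for your cluster:

```python
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostnames as used in this thread):
# port_reachable("jenkins.operations", 8080)        # JENKINS_URL endpoint
# port_reachable("jenkins-agent.operations", 50000)  # JENKINS_TUNNEL endpoint
```

If the second check fails but the first succeeds, the tunnel service is the missing piece, which matches the error message discussed below in this thread.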
Why is there a need for a second service to access the jenkins container from agent pods?
Good question. An explanation for this extra service would be nice, as the hassle of defining `JENKINS_TUNNEL` seems unnecessary.
I think it's not necessary to add this manually. It seems to be added automatically by the Jenkins Kubernetes plugin.
BTW: the Helm chart just uses the Kubernetes plugin, so a better place to look for documentation regarding that would be the repository https://github.com/jenkinsci/kubernetes-plugin/.
This issue isn't related to the Kubernetes plugin. The Helm chart deploys a service named jenkins-agent regardless of what you set the chart value agent.enabled to. This in turn forces you to define the Kubernetes plugin setting jenkinsTunnel to match the Kubernetes service name created by the chart.
If the chart simply created a single Kubernetes service that listened on ports 8080 and 50000 and pointed back to the master, one would not need to define jenkinsTunnel.
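For illustration only (this is not what the chart renders today), such a combined Service covering both ports might look like the sketch below; the Service name and selector label are assumptions and would need to match your chart's pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/component: jenkins-master  # assumed label, match your deployment
  ports:
    - name: http
      port: 8080
      targetPort: 8080
    - name: agent-listener
      port: 50000
      targetPort: 50000
```

With a single Service like this, JENKINS_URL and the agent listener would resolve to the same hostname, so no separate jenkinsTunnel value would be required.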
Took me a while to look into this. You are correct, service configuration is up to the chart.
I checked the latest chart version. There is no need to specify jenkinsTunnel as it's configured globally under Manage Jenkins -> Configure System -> Cloud -> Kubernetes -> Jenkins tunnel.
Once I remove that setting, the error message looks like:
```
SEVERE: http://jenkins.jenkins.svc.cluster.local:8080 provided port:50000 is not reachable
java.io.IOException: http://jenkins.jenkins.svc.cluster.local:8080 provided port:50000 is not reachable
	at org.jenkinsci.remoting.engine.JnlpAgentEndpointResolver.resolve(JnlpAgentEndpointResolver.java:286)
	at hudson.remoting.Engine.innerRun(Engine.java:523)
	at hudson.remoting.Engine.run(Engine.java:474)
```
If the service also listened on port 50000, it would work.
You asked why it's not done like this and why there are two services instead. Honest answer: I don't know. I can only assume the two ports were separated to prevent port 50000 from being exposed outside the cluster.
Looks like this is the explanation:
```yaml
# Kubernetes service type for the JNLP slave service
# slaveListenerServiceType is the Kubernetes Service type for the JNLP slave service,
# either 'LoadBalancer', 'NodePort', or 'ClusterIP'
# Note if you set this to 'LoadBalancer', you *must* define annotations to secure it. By default
# this will be an external load balancer and allowing inbound 0.0.0.0/0, a HUGE
# security risk: https://github.com/kubernetes/charts/issues/1341
slaveListenerServiceType: "ClusterIP"
```
Hello @torstenwalter,
Do you know how to make the service listen on port 50000 via Helm values?
I tried to use extraPorts but I think it's not what I'm looking for.
I don't know and I am not sure if that should be supported at all considering the security implications listed above.
I had the same issue when running Jenkins in a Docker container and adding a different host as an agent.
I was able to resolve it by adding an additional port mapping in docker-compose.yml for 50000:50000 (mapping host port 50000 to port 50000 of the Jenkins container):
```yaml
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/docker
    build:
      context: pipeline
    ports:
      - "80:8080"       # web traffic
      - "50000:50000"   # agent connection to the Jenkins container
```
Just make sure your port 50000 is open
I met this problem when trying to configure a permanent Jenkins agent (the agent is located on a VM outside the k8s cluster). I found that the WebSocket approach works correctly. Reference: https://www.jenkins.io/blog/2020/02/02/web-socket/
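For reference, an inbound agent can be started in WebSocket mode by passing the `-webSocket` flag to agent.jar (supported since Jenkins 2.217, per the blog post linked above). All traffic then travels over the regular HTTP(S) port, so no separate tunnel service or open port 50000 is needed. The controller URL, agent name, and secret path below are placeholders you would replace with your own:

```shell
# Placeholder values throughout; substitute your controller URL, agent name, and secret.
curl -sO http://jenkins.example.com:8080/jnlpJars/agent.jar
java -jar agent.jar \
  -url http://jenkins.example.com:8080 \
  -name vm-agent \
  -secret @/path/to/secret-file \
  -webSocket
```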

