azure-vm-agents-plugin
                        Inbound: support for TCP tunnel
What feature do you want to see added?
Same as https://github.com/jenkinsci/azure-vm-agents-plugin/issues/421, except I would want to use a TCP tunnel (like permanent or Kubernetes agents are doing).
Upstream changes
No response
You have full control of how you start the agent.
Just change your systemd service ExecStart to:
ExecStart=/usr/bin/java -cp /home/jenkins/inbound-agent/agent.jar hudson.remoting.jnlp.Main -headless -direct ${TUNNEL_ADDRESS} -workDir /home/jenkins/work -instanceIdentity ${INSTANCE_IDENTITY} @/home/${USER}/inbound-agent/agent-secret ${NODE_NAME}
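For context, a full unit along those lines might look like the following sketch. The paths, the environment file, and the variable values are all assumptions for illustration, not part of the plugin:

```ini
# /etc/systemd/system/jenkins-agent.service -- hypothetical example
[Unit]
Description=Jenkins inbound agent (direct TCP connection)
After=network-online.target
Wants=network-online.target

[Service]
User=jenkins
# TUNNEL_ADDRESS, INSTANCE_IDENTITY and NODE_NAME are assumed to be
# provided by an environment file written at provisioning time.
EnvironmentFile=/home/jenkins/inbound-agent/agent.env
ExecStart=/usr/bin/java -cp /home/jenkins/inbound-agent/agent.jar hudson.remoting.jnlp.Main \
    -headless \
    -direct ${TUNNEL_ADDRESS} \
    -workDir /home/jenkins/work \
    -instanceIdentity ${INSTANCE_IDENTITY} \
    @/home/jenkins/inbound-agent/agent-secret ${NODE_NAME}
Restart=on-failure

[Install]
WantedBy=multi-user.target
```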
Thanks! That means it is technically feasible: I thought settings such as a TCP tunnel, WebSocket, etc. would have to be set up when the "node" object is created on the Jenkins side. Gotta try!
However, being able to configure this through the UI would improve the UX: the same experience as a "permanent inbound agent" and the Kubernetes agents. Otherwise, we end up with yet another cloud plugin with yet another way to configure agents. Does it make sense (even if it's not urgent, or needs another contributor)?
I'm not sure what the UI for permanent agents does in this case.
For a tunnel to make sense, I would have thought you need to use the tunnel to connect to the controller. But the instructions in the UI just have the agent using the controller's JNLP file, the same as without a tunnel.
I guess you could have the controller's HTTPS port exposed but want a tunnel for the TCP port, but that just seems a bit odd.
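For reference, what the permanent-inbound-agent tunnel setting maps to at launch time is (as far as I understand) the `-tunnel` option of agent.jar, which replaces the TCP endpoint advertised by the controller while the HTTP(S) side stays as-is. A sketch, with every host name, node name, and path being a placeholder:

```
# Hypothetical invocation: talk to the controller over HTTPS as usual, but open
# the TCP agent connection through tunnel-host:50000 instead of the endpoint
# the controller advertises. All values below are placeholders.
java -jar agent.jar \
  -url https://jenkins.example.com/ \
  -name my-node \
  -secret @/home/jenkins/agent-secret \
  -tunnel tunnel-host:50000
```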
I just hit this case with trusted.ci.jenkins.io, the private controller used to generate update center and RPU data.
It is accessed through an SSH tunnel, with each user's localhost port 1443 forwarded to port 443 of the service.
Since the Jenkins location is set to port 1443, the jnlpUrl and direct HTTP access all use the wrong port, while port 50000 is directly reachable (as a private port).
Of course there are other solutions in this case (WebSocket, I guess, or changing the HTTPS port to 1443), but that is a real-life example that other users could hit on less flexible systems.
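A sketch of that setup; the ports and the controller host come from the description above, while the bastion host and user are placeholders:

```
# Forward the user's local port 1443 to port 443 of the private controller.
# "bastion.example.com" and "user" are placeholders for illustration.
ssh -N -L 1443:trusted.ci.jenkins.io:443 user@bastion.example.com
# The controller UI is then reachable at https://localhost:1443/, but any URL it
# advertises still points at port 443, while the agent TCP port 50000 is only
# reachable from inside the private network.
```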
That sounds like a misconfigured service: if users are expected to access it on 1443, Jenkins should be configured for that.
Port forwarding to access Jenkins is a very unusual use case, at least for normal operation.
Most users would have it accessible over a VPN or behind a proxy of some sort.
However, being able to configure this using the UI would improve the UX for users: same experience as a "permanent inbound agent"
This was removed in Jenkins core in https://github.com/jenkinsci/jenkins/pull/8762
Documentation is being updated in https://github.com/jenkinsci/azure-vm-agents-plugin/pull/607