helm-charts
Add support for adding more clouds via values.yaml
Is your feature request related to a problem? Please describe
At the moment the existing helm chart contains only one cloud configuration (kubernetes). It is included in the jenkins.casc.defaults definition within _helpers.tpl. So if, as a Jenkins administrator, I want an additional cloud besides kubernetes — such as the EC2 Fleet cloud provided by ec2-fleet-plugin — there is no simple way to do that with the current version of the chart. Forking the official helm chart and maintaining our own is out of scope.
All of the default configuration is generated when JCasC.defaultConfig == true. It is possible to override the entire Jenkins configuration by setting it to false, but one might consider that cumbersome (due to the maintenance effort during upgrades) just for the sake of adding one additional cloud.
Describe the solution you'd like
We would like a way to add additional clouds to the JCasC config via values, similar to .Values.agent.podTemplates, so that the default kubernetes cloud still exists and more clouds can be added in a loop.
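A minimal sketch of what such a values-driven hook could look like, assuming a hypothetical controller.additionalClouds key (this key does not exist in the chart today; all names and parameters below are placeholders):

```yaml
controller:
  JCasC:
    defaultConfig: true
  # Hypothetical key: each entry here would be appended to jenkins.clouds
  # after the default kubernetes cloud rendered by jenkins.casc.defaults.
  additionalClouds:
    - eC2Fleet:
        name: "FleetCloud"
        fleet: "cicd-jenkins-agents-ec2-fleet"
        region: "us-east-1"
        minSize: 1
        maxSize: 5
```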
Describe alternatives you've considered
Using the helm template command in conjunction with kustomize build does not work for us, because the entire JCasC configuration is provided as a single string-type field of the ConfigMap. In that case we would have to provide the entire config there, which adds confusion and does not make life easier.
{{- if .Values.controller.JCasC.defaultConfig }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ template "jenkins.fullname" $root }}-jenkins-jcasc-config
  namespace: {{ template "jenkins.namespace" $root }}
  labels:
    "app.kubernetes.io/name": {{ template "jenkins.name" $root }}
    {{- if .Values.renderHelmLabels }}
    "helm.sh/chart": "{{ $root.Chart.Name }}-{{ $root.Chart.Version }}"
    {{- end }}
    "app.kubernetes.io/managed-by": "{{ $.Release.Service }}"
    "app.kubernetes.io/instance": "{{ $.Release.Name }}"
    "app.kubernetes.io/component": "{{ $.Values.controller.componentName }}"
    {{ template "jenkins.fullname" $root }}-jenkins-config: "true"
data:
  jcasc-default-config.yaml: |-
    {{- include "jenkins.casc.defaults" . | nindent 4 }}
{{- end }}
{{- end }}
Additional context
No response
Can you just add extras in your own JCasC block? Lists should merge fine, I believe. https://github.com/jenkinsci/helm-charts/tree/main/charts/jenkins#configure-security-realm-and-authorization-strategy
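For reference, that suggestion would look roughly like this in values.yaml (a sketch only; the eC2Fleet parameters are placeholders, and whether the clouds list actually merges with the default is exactly what the rest of this thread disputes):

```yaml
controller:
  JCasC:
    defaultConfig: true
    configScripts:
      extra-cloud: |
        jenkins:
          clouds:
            - eC2Fleet:
                name: "FleetCloud"
                region: "us-east-1"
```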
I'm having the same issue. When I tried to add extras using the JCasC block, it didn't add the clouds either.
As I mentioned in the description, there is no easy way to do it; it is cumbersome. One needs to:
- disable default config generation;
- provide all of the configuration that is in the default config, adding it piece by piece via JCasC blocks, which is not convenient and implies huge maintenance effort during upgrades.
@oleksandr-openweb Can you show me your JCasC block with all the defaults you have added one by one (including multiple kubernetes clouds)? I know this is not a great workaround, but at least you would have something working.
@wrender the main issue is in manually tweaking the clouds section:
# Source: jenkins/templates/jcasc-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cicd-jenkins-jenkins-jcasc-config
  namespace: jenkins
  labels:
    "app.kubernetes.io/name": jenkins
    "app.kubernetes.io/managed-by": "Helm"
    "app.kubernetes.io/instance": "cicd"
    "app.kubernetes.io/component": "jenkins-controller"
    cicd-jenkins-jenkins-config: "true"
data:
  jcasc-default-config.yaml: |-
    jenkins:
      authorizationStrategy:
        projectMatrix:
          permissions:
            # jcasc needs the admin group so it can reload the config yamls
            - "USER:Overall/Administer:admin"
            # Job
            - "GROUP:Job/Build:Backend"
            - "GROUP:Job/Build:QA"
            - "GROUP:Job/Cancel:Backend"
            - "GROUP:Job/Cancel:QA"
            - "GROUP:Job/Read:Backend"
            - "GROUP:Job/Read:QA"
            # Lockable Resources
            - "GROUP:Lockable Resources/View:Backend"
            - "GROUP:Lockable Resources/View:QA"
            # Metrics
            - "GROUP:Metrics/View:Backend"
            - "GROUP:Metrics/View:QA"
            # Overall
            - "GROUP:Overall/Administer:DevOps"
            - "GROUP:Overall/Read:Backend"
            - "GROUP:Overall/Read:QA"
            # Run
            - "GROUP:Run/Delete:Backend"
            - "GROUP:Run/Delete:QA"
            - "GROUP:Run/Replay:Backend"
            - "GROUP:Run/Replay:QA"
            - "GROUP:Run/Update:Backend"
            - "GROUP:Run/Update:QA"
            # View
            - "GROUP:View/Read:Backend"
            - "GROUP:View/Read:QA"
      securityRealm:
        saml:
          binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
          displayNameAttributeName: "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name"
          emailAttributeName: "email"
          groupsAttributeName: "http://schemas.xmlsoap.org/claims/Group"
          idpMetadataConfiguration:
            period: 0
            xml: "here we have SAML XML entity descriptor as a string"
          maximumAuthenticationLifetime: 86400
          usernameCaseConversion: "lowercase"
      disableRememberMe: true
      mode: NORMAL
      numExecutors: 0
      labelString: ""
      projectNamingStrategy: "standard"
      markupFormatter:
        rawHtml:
          disableSyntaxHighlighting: true
      clouds:
        - kubernetes:
            containerCapStr: "10"
            jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
            jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
            name: "kubernetes"
            namespace: "jenkins-agents"
            podLabels:
              - key: "jenkins/jenkins-agent"
                value: "true"
            serverUrl: "https://kubernetes.default"
            templates:
              - containers:
                  - args: "^${computer.jnlpmac} ^${computer.name}"
                    command: "sleep"
                    envVars:
                      - envVar:
                          key: "JENKINS_URL"
                          value: "http://jenkins.jenkins.svc.cluster.local:8080/"
                    image: "jenkins/inbound-agent:4.11.2-4"
                    livenessProbe:
                      failureThreshold: 0
                      initialDelaySeconds: 0
                      periodSeconds: 0
                      successThreshold: 0
                      timeoutSeconds: 0
                    name: "jnlp"
                    resourceLimitCpu: "512m"
                    resourceLimitMemory: "512Mi"
                    resourceRequestCpu: "512m"
                    resourceRequestMemory: "512Mi"
                    workingDir: "/home/jenkins/agent"
                id: "ec0bd947865a67eab1b1c7e3f4edf31b9e1e67926f020a66fb783c929602592d"
                label: "cicd-jenkins-agent"
                name: "default"
                namespace: "jenkins-agents"
                nodeUsageMode: "NORMAL"
                podRetention: "never"
                serviceAccount: "default"
                slaveConnectTimeout: 100
                slaveConnectTimeoutStr: "100"
                yamlMergeStrategy: "override"
        - eC2Fleet:
            addNodeOnlyIfRunning: false
            alwaysReconnect: false
            cloudStatusIntervalSec: 10
            computerConnector:
              sSHConnector:
                credentialsId: "ec2-fleet-ssh-key"
                launchTimeoutSeconds: 60
                maxNumRetries: 10
                port: 22
                retryWaitTime: 15
                sshHostKeyVerificationStrategy: "nonVerifyingKeyVerificationStrategy"
            disableTaskResubmit: false
            fleet: "cicd-jenkins-agents-ec2-fleet"
            idleMinutes: 0
            initOnlineCheckIntervalSec: 15
            initOnlineTimeoutSec: 180
            labelString: "ec2-fleet-spot-agents"
            maxSize: 1
            maxTotalUses: -1
            minSize: 1
            minSpareSize: 0
            name: "FleetCloud"
            noDelayProvision: false
            numExecutors: 1
            oldId: "af9f5077-127b-4e03-a882-343f7178a6ab"
            privateIpUsed: false
            region: "us-east-1"
            restrictUsage: false
            scaleExecutorsByWeight: false
      crumbIssuer:
        standard:
          excludeClientIPFromCrumb: true
    security:
      apiToken:
        creationOfLegacyTokenEnabled: false
        tokenGenerationOnCreationEnabled: false
        usageStatisticsEnabled: true
      scriptApproval:
        approvedSignatures:
          - "method net.sf.json.JSON toString int"
    unclassified:
      location:
        adminAddress: [email protected]
        url: https://jenkins.example.com/
I think what I'm doing is a bit different. I'm trying to just define multiple kubernetes clouds. Something like this:
jenkins:
  clouds:
  - kubernetes:
    jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
    jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
    name: "kubernetes"
    namespace: "jenkins-agents"
    podLabels:
    - key: "jenkins/jenkins-agent"
      value: "true"
    serverUrl: "https://kubernetes.default"
  - kubernetes:
    jenkinsTunnel: "myjenkins.domain.local:50000"
    jenkinsUrl: "http://myjenkins.domain.local:8080"
    name: "second-kubernetes-cluster"
    namespace: "jenkins-agents"
    podLabels:
    - key: "jenkins/jenkins-agent"
      value: "true"
    serverUrl: "https://remotecluster.blah.blah.blah"
But this doesn't seem to work....
@wrender I think your indentation needs to be fixed:
jenkins:
  clouds:
    - kubernetes:
        jenkinsTunnel: "jenkins-agent.jenkins.svc.cluster.local:50000"
        jenkinsUrl: "http://jenkins.jenkins.svc.cluster.local:8080"
        name: "kubernetes"
        namespace: "jenkins-agents"
        podLabels:
          - key: "jenkins/jenkins-agent"
            value: "true"
        serverUrl: "https://kubernetes.default"
    - kubernetes:
        jenkinsTunnel: "myjenkins.domain.local:50000"
        jenkinsUrl: "http://myjenkins.domain.local:8080"
        name: "second-kubernetes-cluster"
        namespace: "jenkins-agents"
        podLabels:
          - key: "jenkins/jenkins-agent"
            value: "true"
        serverUrl: "https://remotecluster.blah.blah.blah"
Thanks @torstenwalter . I'll try this out tomorrow to see if it resolves the issue.
Does anyone happen to have a working values.yaml file that shows settings needed for a local kubernetes cluster, and also a remote kubernetes cluster? For the life of me I can't seem to get the agent on the remote kubernetes cluster to load correctly. It starts up, but then just continuously creates, and terminates pods without actually running the workload. I've tried websocket, and ensuring that tcp port 50000 is open on the loadbalancer, but no luck.
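Not a tested answer, but one knob worth checking for the remote-cluster case: the kubernetes plugin supports a webSocket option on the cloud, which makes agents connect back over the Jenkins HTTP(S) URL instead of the inbound TCP port 50000. A sketch with placeholder names and URLs:

```yaml
- kubernetes:
    name: "remote-cluster"
    serverUrl: "https://remote-api.example.com"   # placeholder API endpoint
    credentialsId: "remote-kubeconfig"            # placeholder credential
    namespace: "jenkins-agents"
    webSocket: true
    # With webSocket the agent dials jenkinsUrl, so that URL must be
    # reachable from the remote cluster; jenkinsTunnel is then unused.
    jenkinsUrl: "https://jenkins.example.com"
```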
@torstenwalter helm throws an error when I try to deploy this with a 4-space indent under the JCasC section of the chart.
This is how we were able to add an extra cloud. The main thing is that it needs to be under configScripts; the extra cloud is called nv-cloud here.
cloudName: "kubernetes"
JCasC:
  defaultConfig: true
  configScripts:
    welcome-message: |
      jenkins:
        systemMessage: Welcome to Jenkins, This Jenkins is configured and managed 'as code'.
    active-directory: |
      jenkins:
        securityRealm:
          azure:
            cacheDuration: 3600
    matrix-auth: |
      jenkins:
        authorizationStrategy:
          azureAdMatrix:
            permissions:
    nv-cloud: |
      jenkins:
        clouds:
          - kubernetes:
              addMasterProxyEnvVars: true
              containerCap: 10
              containerCapStr: "10"
              credentialsId:
              jenkinsTunnel:
              jenkinsUrl:
              name: "kubernetes-nv"
              namespace: "jenkins-agents"
Thanks @torstenwalter and @typeBlkCofe. I was able to get it working; I was just missing the pipe and the 4-space indent.
It seems like either the values.yaml should indicate how to add additional clouds using JCasC, or the default cloud should be added using JCasC from the start. This is overly confusing to a new Jenkins user trying to install and configure the software properly.
Agreed that values.yaml should include the example of additional clouds. This is a common use case.
I'm new to this; what is the rationale for managing multiple clusters from a single Jenkins instance rather than configuring Jenkins on each cluster? You can still abstract out all of the shared configuration through the default values.yaml and specify any cluster-specific configs in a separate values file.
Is there actually a way to disable the default kubernetes cloud and keep only the kubernetes cloud defined in JCasC?
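As noted in the issue description, the only switch today is controller.JCasC.defaultConfig, and it is all-or-nothing: setting it to false drops the entire generated default (including the built-in kubernetes cloud), after which every cloud must be declared by hand. A sketch with placeholder names:

```yaml
controller:
  JCasC:
    # Removes the generated kubernetes cloud -- and every other default,
    # which then has to be re-created manually in configScripts.
    defaultConfig: false
    configScripts:
      my-cloud: |
        jenkins:
          clouds:
            - kubernetes:
                name: "my-only-cloud"
                namespace: "jenkins-agents"
```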
this is how we were able to add an extra cloud. the main thing is that it needs to be under the configScripts and the extra cloud was called nv-cloud here.
This worked for me, but I can only add a single cloud; otherwise I get this error:
java.lang.IllegalArgumentException: Single entry map expected to configure a hudson.slaves.Cloud
How can we add multiple clouds with Configuration as Code? Here's the code I tried that gave the error above (ignore the bad spacing/formatting):
JCasC:
  defaultConfig: true
  configScripts:
    cloud1: |
      jenkins:
        clouds:
          - kubernetes:
              name: "k8s-prod1"
              ...
    cloud2: |
      jenkins:
        clouds:
          - kubernetes:
              name: "k8s-infra"
              ...
I also tried adding multiple - kubernetes: blocks back to back under the same key, like this below, but it yielded the same error.
nv-cloud: |
  jenkins:
    clouds:
      - kubernetes:
      - kubernetes:
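One reading of that error: each item under clouds must be a single-entry map (the cloud type as the only key, with its settings nested beneath it), and the attempts above either flatten the settings to the same level as the cloud key or split the clouds list across competing configScripts. A sketch that keeps all clouds in one script and one list, with distinct names (all values here are placeholders):

```yaml
JCasC:
  defaultConfig: true
  configScripts:
    all-clouds: |
      jenkins:
        clouds:
          - kubernetes:
              name: "k8s-prod1"
              serverUrl: "https://prod1.example.com"
          - kubernetes:
              name: "k8s-infra"
              serverUrl: "https://infra.example.com"
```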
Hey all, do we have any documentation on how to set up a vSphere cloud? I am trying to add three clouds, two kubernetes and one vSphere, but I don't see any documentation on what the YAML is supposed to look like for vSphere. Does anyone have any ideas on how to do that?