Feature Request: fleet.yaml to support helmSecretName
Is your feature request related to a problem?
A GitRepo may contain several fleet.yaml files, each referencing a Helm chart.
Example repo layout:
|- cert-manager
| |- fleet.yaml <- open-source no secret required
|- service-mesh
| |- fleet.yaml <- requires A credentials
|- low-code
| |- fleet.yaml <- requires B credentials
The repo in this example needs two different secrets (A and B).
This is not supportable with the current GitRepo, where Helm credential support is limited.
Solution you'd like
Can we supply the helmSecretName in the fleet.yaml, as well as (or instead of) specifying it in the GitRepo?
This would allow the secret name to be specified in the context of the fleet.yaml that references the Helm chart.
(We checked the fleet.yaml documentation but could not see this being a supported option.)
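For illustration, a hypothetical fleet.yaml with the requested option might look like the sketch below; the helmSecretName entry under helm is the proposed addition (not currently supported), and the chart details are placeholders:

# fleet.yaml for the service-mesh path (hypothetical sketch)
defaultNamespace: service-mesh
helm:
  repo: https://charts.example.com/service-mesh   # placeholder chart registry
  chart: service-mesh
  version: 1.2.3
  helmSecretName: credentials-a                    # proposed field: secret A, scoped to this fleet.yaml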
Alternatives you've considered
Multiple GitRepos configured for a single git repository, one per Helm secret.
This has an unwanted side effect: a workload that previously needed no credential and later needs one has to be moved to the GitRepo that carries the credentials, which impacts its bundles and may cause the workload to be re-deployed. A rough sketch of this alternative follows below.
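A rough sketch of this alternative, with hypothetical names for the secrets and the repository (each GitRepo can only carry a single helmSecretName for all of its paths):

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo-creds-a        # hypothetical
  namespace: fleet-default
spec:
  repo: https://git.example.com/example/repo
  helmSecretName: credentials-a     # secret A
  paths:
    - service-mesh
---
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-repo-creds-b        # hypothetical
  namespace: fleet-default
spec:
  repo: https://git.example.com/example/repo
  helmSecretName: credentials-b     # secret B
  paths:
    - low-code
# cert-manager could live in a third GitRepo with no helmSecretName at all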
Anything else?
No response
Adding milestone to get it on our radar.
Is it possible to use https://fleet.rancher.io/gitrepo-add#use-different-helm-credentials-for-each-path? We also intend to add globbing to that secret.
Hi @manno - how is the secret meant to be structured? I've tried a few variations, and no matter what I do, the job/pods error out. The output below is from the failing job pod:
HOSTNAME=system-control-external-workloads-7bd38-rkd4r
KUBERNETES_PORT_443_TCP_PROTO=tcp
COMMIT=b942f*************************aedae14
KUBERNETES_PORT_443_TCP_ADDR=172.20.0.1
KUBERNETES_PORT=tcp://172.20.0.1:443
PWD=/workspace/source
HOME=/fleet-home
KUBERNETES_SERVICE_PORT_HTTPS=443
GIT_SSH_COMMAND=ssh -o stricthostkeychecking=accept-new
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP=tcp://172.20.0.1:443
SHLVL=1
KUBERNETES_SERVICE_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_HOST=172.20.0.1
_=/usr/bin/env
time="2025-05-12T13:51:18Z" level=fatal msg=EOF
For example, I have a repo with paths to:
- workloads/control/external/crossplane
- workloads/control/external/custom
crossplane does not need helm credentials/authentication. custom DOES need helm credentials/authentication.
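For context, a hypothetical fleet.yaml under workloads/control/external/custom might reference the private chart roughly like this (the repository and chart names are placeholders, not the actual setup):

defaultNamespace: custom
helm:
  repo: https://charts.example.com/private   # placeholder Helm repository that requires credentials
  chart: custom
  version: 0.1.0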
I've tried specifying only the parent path in the GitRepo - same error:
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: system-control-external-workloads
  namespace: my-namespace
spec:
  branch: feature/1234
  clientSecretName: github-technical-user
  correctDrift:
    enabled: true
    force: true
  helmSecretNameForPaths: helm-secret-name-for-paths
  paths:
    - workloads/control/external
I've tried specifying each child path in the GitRepo - same error:
...
  paths:
    - workloads/control/external/crossplane
    - workloads/control/external/custom
For the secret, I've tried:
- no secret at all
  - if the secret doesn't exist, Fleet fails silently and doesn't build any of the bundles (even the ones that don't need Helm credentials)
- a secret containing only the custom path - same error
- a secret containing all paths - same error
- the path having the same structure as the 'path' above (i.e. workloads/control/external/custom) - same error
- the path being hyphenated (as per a bundle path rather than the repo) (i.e. workloads-control-external-custom) - same error
- the paths being relative - same error
- the paths being absolute - same error
- most of the above permutations - same error
The documentation is limited, and I've exhausted all of my guesses. Any help is very much appreciated.
Thanks
The feature works, but I think we'd need to improve the documentation. I'm sharing my example here.
I have 2 Helm charts: one is in a public GitHub registry (no authentication required) and the other is stored in a private OCI registry which requires authentication.
This is the GitRepo:
kind: GitRepo
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: gitrepo
  namespace: fleet-local
spec:
  helmSecretNameForPaths: test-multipasswd
  repo: https://github.com/0xavi0/fleet-examples
  branch: helm-multi-passwd
  paths:
    - single-cluster/test-multipasswd/no-passwd
    - single-cluster/test-multipasswd/passwd
I created the secret this way:
kubectl create secret generic test-multipasswd -n fleet-local --from-file=secrets-path.yaml
IMPORTANT NOTE: the secret file must be named secrets-path.yaml. Maybe we could change this in future versions, but for now the name is hardcoded internally and only secrets-path.yaml works.
The content of my secrets-path.yaml is:
single-cluster/test-multipasswd/passwd:
  username: fleet-ci
  password: foo
  insecureSkipVerify: true
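For anyone who prefers not to write secret files to their local disk, an equivalent in-cluster manifest would look roughly like the following sketch; the important part is that the key under stringData is exactly secrets-path.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: test-multipasswd
  namespace: fleet-local
type: Opaque
stringData:
  secrets-path.yaml: |              # the key name must be exactly secrets-path.yaml
    single-cluster/test-multipasswd/passwd:
      username: fleet-ci
      password: foo
      insecureSkipVerify: true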
The documentation should explain what the expected content of the above file is (work in progress). The possible values for each path are:
username: USERNAME_VALUE
password: PASSWORD_VALUE
caBundle: CA Bundle to be used when downloading the chart
sshPrivateKey: Private key to be used when downloading the chart
insecureSkipVerify: boolean value to skip TLS certificate verification
As you can see, my example requires credentials for one of the paths only.
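If credentials were needed for more than one path, the file would simply list one entry per path, roughly like this sketch (paths and values are placeholders); paths that need no credentials are left out entirely:

single-cluster/test-multipasswd/passwd:
  username: fleet-ci
  password: foo
workloads/control/external/custom:    # hypothetical second path
  username: example-user
  password: example-password
  insecureSkipVerify: true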
When using a secret file with a name different from secrets-path.yaml I get the same EOF error, so I suspect that might be the cause of the issue reported above.
@jamescooke-xyz Could you please confirm you created the secret as described in the example?
(Also if the file was named secrets-path.yaml and if the contents are following the same format)
Hi @0xavi0 - I have retested, being very careful to import/configure the secret using 'secrets-path.yaml' and the content structure as advised, and I can confirm this appears to be working for me.
Thanks for coming back to me. I did feel the documentation was a little light on details so I agree it could be improved a little.
I did think I'd set it up as desired; however, I don't like to create files containing secrets on my local disk, so I usually hand-craft them. I thought I'd followed the guide (with dummy data) and then updated the secret through the Rancher UI, but I must have made an error. I got to a point where I thought I'd tried all combinations, but as expected I had missed the correct one :)
Thanks again.