Kubernetes mode service account doesn't propagate down to workers, which get the default service account
Checks
- [X] I've already read https://docs.github.com/en/actions/hosting-your-own-runners/managing-self-hosted-runners-with-actions-runner-controller/troubleshooting-actions-runner-controller-errors and I'm sure my issue is not covered in the troubleshooting guide.
- [X] I am using charts that are officially provided
Controller Version
0.9.3
Deployment Method
Helm
Checks
- [X] This isn't a question or user support case (For Q&A and community support, go to Discussions).
- [X] I've read the Changelog before submitting this issue and I'm sure it's not due to any recently-introduced backward-incompatible changes
To Reproduce
Spin up the basic quick-start workflow and you'll see that the runner pod has the configured service account, but the workers with the "-workflow" suffix are given the default service account.
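For reference, our setup looks roughly like the following (names and storage class are illustrative, not our exact values). The serviceAccountName under template.spec ends up on the runner pod only:

```yaml
# values.yaml for the gha-runner-scale-set chart (illustrative names)
githubConfigUrl: https://github.com/my-org
githubConfigSecret: github-app-secret
containerMode:
  type: kubernetes              # job steps run in separate "-workflow" pods
  kubernetesModeWorkVolumeClaim:
    accessModes: ["ReadWriteOnce"]
    storageClassName: gp2
    resources:
      requests:
        storage: 1Gi
template:
  spec:
    serviceAccountName: runner-sa   # applied to the runner pod, NOT the "-workflow" pods
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
```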
Describe the bug
We're running into severe permissions issues in our workflows because there is no way to give the workers the same service account as our scale set.
Describe the expected behavior
There should be some option to have the workers mimic the runner's service account if need be. This complicates things greatly, as we do not want to add the IRSA role that is used for the runners to the default service account.
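Purely as a sketch of the kind of knob I mean (this key does not exist in the chart today; the name is made up):

```yaml
# HYPOTHETICAL values.yaml snippet -- "inheritServiceAccount" is not a real chart option
containerMode:
  type: kubernetes
  inheritServiceAccount: true   # hypothetical: "-workflow" pods reuse the runner pod's SA
```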
Additional Context
N/A
Controller Logs
N/A
Runner Pod Logs
N/A
Hello! Thank you for filing an issue.
The maintainers will triage your issue shortly.
In the meantime, please take a look at the troubleshooting guide for bug reports.
If this is a feature request, please review our contribution guidelines.
Hello! We would really appreciate it if you could fix this issue. I'm in the same scenario; let me share a screenshot....
Hi team :)
We found a workaround for this issue by populating the ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE env variable, and we're going to open a pull request to the hooks repository next week to make this automatic! 🚀
@Hchanni what you need to do is point the ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE variable at a .yaml file that contains the "extras" you need to inject into the -workflow container.
You can put that in a ConfigMap and mount it as a file in your RunnerScaleSet config.
```yaml
--- # runnerset-cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: our-runnerset-additional-config
data:
  override.yaml: |
    spec:
      serviceAccountName: our-runnerset-serviceaccount-name
```
```yaml
# runnerset.yaml
...
env:
  - name: ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE
    value: /home/runner/k8s/additionalPodTemplate.yaml
volumeMounts:
  - name: pod-additional-config
    mountPath: /home/runner/k8s/additionalPodTemplate.yaml
    subPath: override.yaml   # without subPath the ConfigMap is mounted as a directory
volumes:
  - name: pod-additional-config
    configMap:
      name: our-runnerset-additional-config
```
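For completeness, this is roughly where those fields sit in the scale-set values; your layout may differ, but the env and volumeMounts go on the runner container and the volumes on the pod spec:

```yaml
# runnerset.yaml, fuller sketch (illustrative; adapt names and paths to your setup)
template:
  spec:
    containers:
      - name: runner
        image: ghcr.io/actions/actions-runner:latest
        command: ["/home/runner/run.sh"]
        env:
          - name: ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE
            value: /home/runner/k8s/additionalPodTemplate.yaml
        volumeMounts:
          - name: pod-additional-config
            mountPath: /home/runner/k8s/additionalPodTemplate.yaml
            subPath: override.yaml   # mount the single ConfigMap key as the file itself
    volumes:
      - name: pod-additional-config
        configMap:
          name: our-runnerset-additional-config
```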
Hey @marcopalmisano, this was exactly the intended way to do it!
Closing this issue since it is related to the container hook, and it is working as intended. The workflow pod shouldn't, by default, have the ability to create new pods in your cluster. If you want it to, you should use the hook extension ☺
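To sketch the idea (names are illustrative): instead of handing the workflow pods the runner's privileged service account, you can point the hook template at a dedicated, minimally-privileged one:

```yaml
# Illustrative: a dedicated service account for the "-workflow" pods
apiVersion: v1
kind: ServiceAccount
metadata:
  name: workflow-pods-sa
---
# hook template file (what ACTIONS_RUNNER_CONTAINER_HOOK_TEMPLATE points to)
spec:
  serviceAccountName: workflow-pods-sa   # grant it only the RBAC your jobs actually need
```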
Thank you @nikola-jokic! 🙏 We opened a merge request to enable this behaviour months ago, but it still hasn't been looked at :(
Are you the intended person to take a look at it? 👀