ConfigMap as a ValueSource in Param in TaskRuns and PipelineRuns
This proposal enables referencing a ConfigMap as the value source for a Param in a TaskRun or PipelineRun. The goal is to support Kubernetes-native options (ConfigMap) as a value source alongside values passed directly to the TaskRun or PipelineRun.
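To illustrate, here is a sketch of what a TaskRun param sourced from a ConfigMap might look like. The field names (valueFrom, configMapKeyRef) are hypothetical and simply mirror the core Kubernetes env convention; the final API shape is up to the TEP, and the resource names are illustrative:

```yaml
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: example-taskrun
spec:
  taskRef:
    name: my-task
  params:
    - name: environment          # plain value, as supported today
      value: staging
    - name: registry-url         # hypothetical: value resolved from a ConfigMap key
      valueFrom:
        configMapKeyRef:
          name: build-settings
          key: registry-url
```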
Note: this proposal essentially picks up on an earlier proposal that was closed unmerged.
For whatever it's worth, I've made the code change in my fork (I have yet to add documentation and tests) https://github.com/mostafaCamel/pipeline/commit/0a3f5169e6c0f21a5a72dc3e6fc1a5b7f6dd7aa7
added the TEP as an agenda item to be discussed in the upcoming meeting
Thanks Vibhav! I am new to the project so let me know if I need to/can attend
@mostafaCamel you are welcome to join the working group calls ! Check the github.com/tektoncd/community repo for more info 🙂
/assign
/assign
Hi @vdemeester @waveywaves is there an ETA for the review?
fyi ... when we implemented this in Shipwright, we used a different approach to access the value: instead of loading it ourselves, we delegate that to Kubernetes. That way we can also support Secrets, which is certainly not possible with the approach proposed here. On the other hand, our approach limits the use of such parameters to cmd, args, and env only.
https://github.com/shipwright-io/community/blob/main/ships/0025-strategy-parameters-enhancements.md
My understanding is that Tekton Tasks already support value sources (ConfigMaps and Secrets) for cmd, args, and env at the container/step level: https://github.com/tektoncd/pipeline/blob/47bcb0937f2882879c1a5e11492203fe618b730c/docs/variables.md?plain=1#L92
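For reference, that existing step-level mechanism is the standard Kubernetes EnvVar machinery applied to a Task step; a minimal sketch (Task, ConfigMap, and key names are illustrative):

```yaml
apiVersion: tekton.dev/v1
kind: Task
metadata:
  name: env-from-configmap
spec:
  steps:
    - name: build
      image: alpine
      env:
        - name: REGISTRY_URL
          valueFrom:
            configMapKeyRef:
              name: build-settings   # illustrative ConfigMap name
              key: registry-url
      script: |
        echo "pushing to ${REGISTRY_URL}"
```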
For env, sure; that's Kubernetes machinery. My point was that in Shipwright we used that Kubernetes capability: parameters that a Shipwright user defines at the BuildRun level as coming from a Secret or ConfigMap are loaded into environment variables on the TaskRun we create for that BuildRun (using exactly what you point to), and we then reference the environment variable in the param value using the $(ENV_VAR) syntax.
Theoretically, Tekton could do the same one level deeper, when it replaces the values in the PodSpec. Instead of replacing $(params.my-param) with the value from the ConfigMap key, you would replace it with $(SOME_ENV_VAR_THAT_TEKTON_ADDS) and, in the env section of the container, load the value of that env var from the ConfigMap. That also works for Secrets, because you never have to put the plain value into the Pod spec.
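Concretely, this indirection would render into the generated Pod's container spec roughly as follows. The env var name is hypothetical; $(VAR) expansion of container env vars in args is standard Kubernetes behavior:

```yaml
# Sketch of a fragment Tekton could generate, not actual output
containers:
  - name: step-build
    image: alpine
    env:
      - name: TEKTON_PARAM_MY_PARAM        # injected by Tekton (hypothetical name)
        valueFrom:
          configMapKeyRef:
            name: build-settings
            key: my-key
    args:
      - --registry=$(TEKTON_PARAM_MY_PARAM)  # kubelet expands $(VAR) from env
```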
But this is just a hint about another option that you may or may not want to consider.
Ah ok, I now understand. Thanks for the explanation!
Rebased the branch and added a new commit to address the reviews.
- The reconciler will no longer unset param.Value after the first reconciler run
I added an explanation in the risks/mitigation section
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: vdemeester
The full list of commands accepted by this bot can be found here.
The pull request process is described here
- ~~teps/OWNERS~~ [vdemeester]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
The eventual getter method I would need to implement (to get either the user-provided param.Value or the resolved-then-stored-somewhere value from the value source) would replace every invocation similar to this, i.e., every place the code accesses param.Value from pr.Spec.Params.
Proposed solution
TLDR: a new boolean field pipelineRun.Status.AttemptedValueSourceResolution is set to true during the first reconcile (along with querying the ConfigMap and populating the value). I have a PoC on a local branch: the CRD update occurs without any issue or any need to override (both for the param value and for Status.AttemptedValueSourceResolution). In subsequent reconciles, nothing happens, because the ConfigMap query and value population are guarded behind if !pipelineRun.Status.AttemptedValueSourceResolution.
This status check is needed so the ConfigMap is queried only once, and to avoid the "Once the PipelineRun has started, only status updates are allowed" error.
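Under this proposal, a PipelineRun after its first reconcile might look roughly like this. The field names follow the proposal, but exactly where the resolved value is stored (spec vs. a status field) is an implementation detail still under discussion:

```yaml
# Sketch of a PipelineRun after the first reconcile
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  name: example-pipelinerun
spec:
  params:
    - name: registry-url
      value: registry.example.com     # populated from the ConfigMap on first reconcile
status:
  attemptedValueSourceResolution: true  # guards against re-querying the ConfigMap
```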
Why is the mutating webhook probably not a good solution?
The webhook is unaware of namespaces other than tekton-pipelines, so it cannot read user-defined ConfigMaps (in the default namespace or in any other non-tekton-pipelines namespace). Otherwise it would have been an attractive solution, given that the webhook calls our SetDefaults, in which we can leverage apis.IsInCreate(ctx) (similar to here), which is true only on the first run of the webhook against the PipelineRun object.
clientset.CoreV1().ConfigMaps(tr.Namespace).Get(ctx, "myhardcodedconfigmapname", metav1.GetOptions{}) will fail with: configmaps "myhardcodedconfigmapname" is forbidden: User "system:serviceaccount:tekton-pipelines:tekton-pipelines-webhook" cannot get resource "configmaps" in API group "" in the namespace "default"
This is because the webhook is scoped to its own namespace only (as opposed to the controller, which is set up to know about all namespaces in the cluster).
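In RBAC terms, this is the difference between the namespaced binding the webhook has today and the cluster-wide read access this approach would require; a rough, illustrative sketch (names are not the actual Tekton release manifests):

```yaml
# Required for this approach: cluster-wide read access to ConfigMaps (illustrative)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: tekton-webhook-configmap-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tekton-webhook-configmap-reader   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tekton-webhook-configmap-reader
subjects:
  - kind: ServiceAccount
    name: tekton-pipelines-webhook
    namespace: tekton-pipelines
```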
Had a discussion with the maintainers in the Tekton working group meeting. I will look into how much work would be needed to refactor the codebase and implement a getter method for pipelineRun.Spec.Params. That would open the possibility of fetching the ConfigMap value into a field under pipelineRun.Status instead of editing pipelineRun.Spec.Params during the reconcile. The getter method could then read either pipelineRun.Spec.Params or the new field under pipelineRun.Status.
Apologies for the delay. I took a quick look at the feasibility of implementing a getter method for PipelineRun.PipelineRunSpec.Params. I think the mutating webhook can be the way to go (if there is no security concern about opening up the webhook's access to all namespaces; see the comment above).
TLDR: I don't think the getter method is feasible, and I think we should discuss opening up the mutating webhook's access to all namespaces in the cluster, not just the tekton-pipelines namespace.
Reasons I think implementing the getter method is not feasible / not worth the hassle:
- There is no way to force future developers to use the getter method instead of accessing the PipelineRun.PipelineRunSpec.Params field directly. If we switched Params to a private field (hence forcing use of the getter method), the field would no longer be (un)marshalled.
  - There seems to be an obtuse way to customize the MarshalJSON method to make private fields marshallable, but I am not sure that change is worth the final goal.
  - There is a non-standard Go library to (un)marshal private fields, but it does not seem broadly adopted in the Go community.
- It would change the method signatures of many helper functions.
  - Example: func (ps *PipelineRunSpec) validatePipelineRunParameters(ctx context.Context) (which reads ps.Params) is called by func (ps *PipelineRunSpec) Validate(ctx context.Context). If we want to access the actual param, we need to make these functions aware of PipelineRun.Status.
  - Theoretically we could make this change, since PipelineRunSpec itself is not a k8s CRD, so there is no "hard" requirement on the signature func (ps *PipelineRunSpec) Validate(ctx context.Context); but again, I don't think the final goal is worth breaking the consistency of these method signatures.
In terms of the number of uses of pipelineRunSpec.Params, I was pleasantly surprised that there are not that many. See the screenshot for the references of v1.PipelineRunSpec.Params (the VS Code extension is probably using this analysis package).

@mostafaCamel the PR needs a rebase. @tektoncd/core-maintainers can you review this? Overall I agree with the feature, but we need to figure things out implementation-wise. That shouldn't block the TEP though (and we could trim the implementation detail in it, if that is what blocks merging this).
Rebased and force-pushed. Apologies Vincent for the delay on my end as well. Will try to have a PR for the implementation by the end of the next month.
@mostafaCamel: PR needs rebase.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.