Change resource key logic for k8s
What this PR does / why we need it:
This fix solves the problem that a cluster-scoped resource can't be deleted when we set the namespace in app.pipecd.yaml. Context: https://github.com/pipe-cd/pipecd/issues/4269#issuecomment-2104055928
Currently, we use the resource key to identify each k8s resource.
It is created by `MakeResourceKey`, and the rule is as follows.

livestate side
- use `default` when the resource obj doesn't have a namespace
- use the namespace when the resource obj has one

When reading the manifests from git
- use `spec.input.namespace` in app.pipecd.yaml if it is set
- use the namespace on the manifest from git if it is set
- use `default` when the namespace on the manifest is ""

This rule doesn't consider cluster-scoped resources. For example, a cluster-scoped resource doesn't have any namespace, but if we set `spec.input.namespace` in app.pipecd.yaml, that value is set as the namespace in the resource key. This causes a problem: the resource can't be pruned because the resource key can't identify it correctly. So I fixed the logic as below.
livestate side
- use "" for cluster-scoped resources
- use the namespace for namespace-scoped resources

When reading the manifests from git
- use "" for cluster-scoped resources
- for namespace-scoped resources:
  - use `spec.input.namespace` in app.pipecd.yaml if it is set
  - use the namespace on the manifest from git if it is set
  - use `default` if neither `spec.input.namespace` nor the manifest namespace is set
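The fixed rule above can be sketched as a small decision function. This is my reading of the rule as described in this PR, not the actual pipecd implementation; the function name and parameters are hypothetical.

```go
package main

import "fmt"

// resourceKeyNamespace sketches the fixed namespace rule for the resource key:
// cluster-scoped resources always get "", and namespace-scoped resources fall
// back from spec.input.namespace to the manifest namespace to "default".
func resourceKeyNamespace(isNamespaced bool, appConfigNS, manifestNS string) string {
	if !isNamespaced {
		// Cluster-scoped resources (e.g. ClusterRole) never carry a namespace,
		// even if spec.input.namespace is set in app.pipecd.yaml.
		return ""
	}
	if appConfigNS != "" {
		// spec.input.namespace in app.pipecd.yaml wins when set.
		return appConfigNS
	}
	if manifestNS != "" {
		// Otherwise, use the namespace written in the manifest from git.
		return manifestNS
	}
	return "default"
}

func main() {
	fmt.Printf("%q\n", resourceKeyNamespace(false, "foo", "")) // cluster-scoped: ""
	fmt.Printf("%q\n", resourceKeyNamespace(true, "foo", "bar")) // "foo"
	fmt.Printf("%q\n", resourceKeyNamespace(true, "", "bar"))    // "bar"
	fmt.Printf("%q\n", resourceKeyNamespace(true, "", ""))       // "default"
}
```

Note how the cluster-scoped check comes first: this is exactly what the old rule was missing, which is why a `spec.input.namespace` setting used to leak into the resource key of a ClusterRole.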
Which issue(s) this PR fixes:
Part of #4269
-
Does this PR introduce a user-facing change?:
-
How are users affected by this change:
-
Is this breaking change:
-
How to migrate (if breaking change):
The overview of the fix
- refine the namespace via `loader.refineNamespace` when reading the manifests from the git repo
- fixed to obtain the information needed to determine whether each k8s resource is cluster-scoped when the executor, planner, plan preview, and detector start (`provider.GetIsNamespacedResources`)
- remove the logic that determines the namespace from `MakeResourceKey`
  - I think `MakeResourceKey` shouldn't determine the namespace; that is only the loader's responsibility.
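The kind of lookup table that `provider.GetIsNamespacedResources` assembles at startup can be sketched as follows. This is a self-contained, hypothetical sketch: `GroupVersionKind` and `apiResource` are local stand-ins for the apimachinery and discovery-API types, and `buildIsNamespacedMap` is an assumed shape, not the actual pipecd code.

```go
package main

import (
	"fmt"
	"strings"
)

// GroupVersionKind mirrors schema.GroupVersionKind from apimachinery,
// redefined here so the sketch is self-contained.
type GroupVersionKind struct{ Group, Version, Kind string }

// apiResource is a simplified stand-in for the discovery API's
// metav1.APIResource: each served kind reports whether it is namespaced.
type apiResource struct {
	GroupVersion string // e.g. "apps/v1", or "v1" for the core group
	Kind         string
	Namespaced   bool
}

// buildIsNamespacedMap builds a GVK -> namespaced lookup that the loader,
// planner, plan preview, and detector can consult.
func buildIsNamespacedMap(resources []apiResource) map[GroupVersionKind]bool {
	m := make(map[GroupVersionKind]bool, len(resources))
	for _, r := range resources {
		// Core-group resources have no "/" in GroupVersion (e.g. just "v1").
		group, version := "", r.GroupVersion
		if before, after, found := strings.Cut(r.GroupVersion, "/"); found {
			group, version = before, after
		}
		m[GroupVersionKind{group, version, r.Kind}] = r.Namespaced
	}
	return m
}

func main() {
	m := buildIsNamespacedMap([]apiResource{
		{"apps/v1", "Deployment", true},
		{"rbac.authorization.k8s.io/v1", "ClusterRole", false},
	})
	fmt.Println(m[GroupVersionKind{"apps", "v1", "Deployment"}])                       // true
	fmt.Println(m[GroupVersionKind{"rbac.authorization.k8s.io", "v1", "ClusterRole"}]) // false
}
```

With such a map available, `loader.refineNamespace` can decide "" vs. a real namespace per resource, and `MakeResourceKey` no longer needs any namespace logic of its own.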
@khanhtc1202 @t-kikuc This issue is complex and my explanation may be insufficient. If you have any questions, please feel free to ask. :) 🙏
@ffjlabo some test cases are failing :eyes:
@khanhtc1202 Sorry, I fixed some of them. 🙏 There are still some failures, but they are under /pipecdv1. I want to fix them after the current implementation looks good.
/review
PR Analysis
Main theme
Enhancement and Refactoring
PR summary
This PR introduces enhancements and refactoring to various components that interact with Kubernetes resources, such as planners, appliers, and detection mechanisms. It involves propagating the knowledge of namespaced versus non-namespaced resources through the system.
Type of PR
Enhancement, Refactoring
PR Feedback:
General suggestions
This PR contains quite a large set of changes across multiple components, focusing on ensuring that the system properly respects namespaced and cluster-wide resources in Kubernetes. While the PR introduces a central mechanism to define whether a GroupVersionKind is namespaced, it's vital to ensure that this information remains consistent and the system's behavior follows the cluster's actual state. Testing these changes in a staging environment before release would be highly recommended to catch any edge cases or regressions.
Code feedback
- relevant file: pkg/app/piped/platformprovider/kubernetes/loader.go
  suggestion: It is essential to guarantee that the information regarding namespaced resources is always fresh and accurate. As this is probably cached information and the cluster's state may change (e.g., CRDs being added/removed), consider adding a refresh mechanism to update the `isNamespacedResources` map periodically or when a certain type of error suggests it might be out of date. (important)
  relevant line: `+ isNamespacedResources: make(map[schema.GroupVersionKind]bool),`
- relevant file: pkg/app/piped/platformprovider/kubernetes/loader.go
  suggestion: There appears to be some unused or deleted code in the `LoadManifests` method, specifically regarding the initiation of Nashorn VM that was previously removed. Ensure that any references or initialization logic related to removed components are cleaned up to avoid confusion. (medium)
  relevant line: `l.initOnce.Do(func() {`
- relevant file: pkg/app/piped/platformprovider/kubernetes/loader.go
  suggestion: Ensure that error messages are informative and actionable. For example, when handling an 'unknown resource kind', the error message should suggest what actions a developer might take or indicate if this is an expected case during normal operations or an exceptional scenario. (medium)
  relevant line: `+ return fmt.Errorf("unknown resource kind %s", m.u.GroupVersionKind().String())`
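The reviewer's refresh suggestion could look roughly like the sketch below: on a cache miss (e.g. a CRD installed after piped started), re-run discovery once before failing with "unknown resource kind". Everything here is hypothetical — `namespacedCache`, its fields, and the error wording are illustrations, not pipecd code; `discover` stands in for a real discovery-API call.

```go
package main

import "fmt"

// GroupVersionKind is a local stand-in for schema.GroupVersionKind.
type GroupVersionKind struct{ Group, Version, Kind string }

// namespacedCache caches the GVK -> namespaced map and refreshes it once on
// a miss, so kinds added to the cluster after startup are still resolved.
type namespacedCache struct {
	cache    map[GroupVersionKind]bool
	discover func() map[GroupVersionKind]bool // stand-in for a discovery call
}

func (c *namespacedCache) isNamespaced(gvk GroupVersionKind) (bool, error) {
	if ns, ok := c.cache[gvk]; ok {
		return ns, nil
	}
	// Miss: the cluster state may have changed since startup, refresh once.
	c.cache = c.discover()
	if ns, ok := c.cache[gvk]; ok {
		return ns, nil
	}
	return false, fmt.Errorf("unknown resource kind %v: not served by the cluster's discovery API even after refresh", gvk)
}

func main() {
	fresh := map[GroupVersionKind]bool{
		{"example.com", "v1", "Widget"}: true,
	}
	c := &namespacedCache{
		cache:    map[GroupVersionKind]bool{}, // empty: simulates a stale cache
		discover: func() map[GroupVersionKind]bool { return fresh },
	}
	ns, err := c.isNamespaced(GroupVersionKind{"example.com", "v1", "Widget"})
	fmt.Println(ns, err) // true <nil>
}
```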
Security concerns:
no
The code changes introduced in this PR do not seem to introduce any obvious security concerns such as SQL injection, XSS, or CSRF vulnerabilities. The updates mostly deal with internal configuration and state management concerning Kubernetes resources. However, the security of the mechanism used to determine the namespaced status of Kubernetes resources should be considered to ensure that it cannot be misused or lead to incorrect assumptions about resource accessibility and isolation.
@ffjlabo Please rebase this PR due to pipedv1 package change 👀
Codecov Report
Attention: Patch coverage is 10.05291% with 170 lines in your changes missing coverage. Please review.
Project coverage is 29.31%. Comparing base (b2dca47) to head (5a83716).
Additional details and impacted files
```
@@ Coverage Diff @@
## master #4916 +/- ##
==========================================
- Coverage   29.39%   29.31%   -0.09%
==========================================
  Files         322      323       +1
  Lines       40852    40984     +132
==========================================
+ Hits        12010    12014       +4
- Misses      27882    28008     +126
- Partials      960      962       +2
```
:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.
The failing test is solved by this PR 🙏 https://github.com/pipe-cd/pipecd/pull/4926
I tested it using QuickSync with the scenario below, and it works well.
scenario:
tried the following operations, with and without setting `spec.input.namespace` in app.pipecd.yaml, for a cluster-scoped resource and a namespaced resource.
- create a k8s app
- update manifests
- fix the actual resource with `kubectl edit`
- delete manifests

I tried them with ClusterRole (cluster-scoped) and Deployment (namespaced) resources.
checkpoint:
- create, update, and delete the resources successfully
- reflect the operation results on the application livestate (check on the Web UI)
- detect OutOfSync after `kubectl edit`
Thank you for reviewing 🙏