kubefed
Should the KubeFed synchronization controller overwrite modifications made to the PVC spec by the member cluster?
What happened:
In my cluster, I created the FederatedPersistentVolumeClaims resource type and then created an fpvc CR. When the PV in the member cluster is bound to the PVC, the volumeName field of the PVC is modified, and the kubefed controller then reports an error:
Warning UpdateInClusterFailed federatedpersistentvolumeclaim/test1234 Failed to update PersistentVolumeClaim "test1234/test1234" in cluster "585ae638bd68": PersistentVolumeClaim "test1234" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
core.PersistentVolumeClaimSpec{
AccessModes: {"ReadWriteMany"},
Selector: nil,
Resources: {Requests: {s"storage": {i: {...}, Format: "BinarySI"}}},
- VolumeName: "",
+ VolumeName: "pvc-a18b6f21-527d-452b-8570-684d8bf066ff",
StorageClassName: &"ha-nfs",
VolumeMode: &"Filesystem",
DataSource: nil,
}
But in the end, the controller did not change this field back.
pvc-examples # kubectl get pvc test1234 -ntest1234
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test1234 Bound pvc-a18b6f21-527d-452b-8570-684d8bf066ff 512Mi RWX ha-nfs 6m1s
What you expected to happen: Maybe there shouldn't be a warning log. Perhaps a note should be added here (Local Value Retention), or a check could be provided to skip some fields.
How to reproduce it (as minimally and precisely as possible): example YAML:
apiVersion: types.kubefed.io/v1beta1
kind: FederatedPersistentVolumeClaim
metadata:
  name: test1234
  namespace: test1234
spec:
  overrides:
  - clusterName: 585ae638bd68
    clusterOverrides:
    - path: /spec/storageClassName
      value: ha-nfs
  - clusterName: 157092d9799e
    clusterOverrides:
    - path: /spec/storageClassName
      value: nfs-fata
  placement:
    clusters:
    - name: 585ae638bd68
    - name: 157092d9799e
  template:
    metadata:
      labels:
        system/srType: share
        system/storageType: nfs
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 512Mi
Anything else we need to know?: kubefed v0.2.0-alpha.1. We haven't tried the latest kubefed yet.
Environment:
- Kubernetes version (use kubectl version): 1.20.8
- KubeFed version: v0.2.0-alpha.1
- Scope of installation (namespaced or cluster): namespaced (kube-federation-system)
- Others:
/kind bug
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@0xff-dev This is expected. Immutable fields of resources like the one you mentioned cannot be changed after they are first created. The solution would be to add volumeName (and similar fields, if there are any) from the PVC to the set of fields that are skipped when updating the cluster-local resource.
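For illustration, here is a minimal sketch of what such a retention hook could look like, modeled on the per-kind field retention the sync controller already performs (for example for Services). The package placement, the function name retainPersistentVolumeClaimFields, and how it would be dispatched for PersistentVolumeClaim objects are assumptions for this sketch, not existing KubeFed code:

// Sketch only: keep spec.volumeName from the cluster object so the sync
// controller stops trying to reset a field the member cluster set when
// binding the PVC to a PV. Placement in this package is an assumption.
package dispatch

import (
	"github.com/pkg/errors"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

func retainPersistentVolumeClaimFields(desiredObj, clusterObj *unstructured.Unstructured) error {
	// volumeName is filled in by the member cluster's PV binder.
	volumeName, ok, err := unstructured.NestedString(clusterObj.Object, "spec", "volumeName")
	if err != nil {
		return errors.Wrap(err, "error retrieving spec.volumeName from cluster object")
	}
	if !ok || volumeName == "" {
		// Claim not bound yet; nothing to retain.
		return nil
	}
	// Copy the bound volume name into the desired object before the update
	// diff is computed, so the immutable field is never "changed back" to "".
	if err := unstructured.SetNestedField(desiredObj.Object, volumeName, "spec", "volumeName"); err != nil {
		return errors.Wrap(err, "error setting spec.volumeName on desired object")
	}
	return nil
}

Such a helper would then be selected for PersistentVolumeClaim objects the same way the existing per-kind retention helpers are, which is essentially the "check to skip some fields" suggested above.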
Thanks a lot. The non-propagability of immutable fields is stated in the Limitations section of the docs; adding retention for the PVC fields would be even better. 😁
This is a simple change and I will keep this issue open, to see if somebody is interested in implementing this.
/assign
Is the work for this issue done? If it's still open, I have time to work on it.
Project is to be archived - closing all issues and PRs.
See https://github.com/kubernetes/org/issues/4122 and https://groups.google.com/d/msgid/kubernetes-sig-multicluster/9f8d81d1-07d1-4985-a7bf-d76197deb971n%40googlegroups.com for details.
/close
@mrbobbytables: Closing this issue.
In response to this:
Project is to be archived - closing all issues and PRs.
See https://github.com/kubernetes/org/issues/4122 and https://groups.google.com/d/msgid/kubernetes-sig-multicluster/9f8d81d1-07d1-4985-a7bf-d76197deb971n%40googlegroups.com for details.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.