
Status of PVC out of sync

Open lts0609 opened this issue 2 years ago • 6 comments

What happened: I created a multi-cluster Helm application, but the PVC status is out of sync.

  1. Create a Helm application on the Karmada control plane via Argo CD
  2. Synchronize the resources
  3. Apply a PropagationPolicy (a sketch of which follows below)
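
For reference, a minimal sketch of the kind of PropagationPolicy applied in step 3. The namespace lts, PVC name grafana, and cluster names host/member are taken from the output further down; the policy name and everything else are illustrative assumptions:

kubectl apply --kubeconfig /etc/karmada/karmada-apiserver.config -f - <<EOF
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: grafana          # illustrative name
  namespace: lts
spec:
  resourceSelectors:
    - apiVersion: v1
      kind: PersistentVolumeClaim
      name: grafana
  placement:
    clusterAffinity:
      clusterNames:
        - host
        - member
EOF

After these steps, the PVC on the control plane stays Pending: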
[root@node1 ~]# kubectl get pvc --kubeconfig /etc/karmada/karmada-apiserver.config
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana   Pending                                                     106s
[root@node1 ~]# kubectl describe pvc grafana --kubeconfig /etc/karmada/karmada-apiserver.config
Name:          grafana
Namespace:     
StorageClass:
Status:        Pending
Events:
  Type     Reason                  Age                    From                  Message
  ----     ------                  ----                   ----                  -------
  Warning  ApplyPolicyFailed       2m33s                  resource-detector     No policy match for resource
  Normal   ApplyPolicySucceed      2m12s                  resource-detector     Apply policy(lts/grafana) succeed
  Normal   SyncSucceed             2m12s                  execution-controller  Successfully applied resource(lts/grafana) to cluster host
  Normal   SyncSucceed             2m12s                  execution-controller  Successfully applied resource(lts/grafana) to cluster member
  Normal   SyncWorkSucceed         2m11s (x6 over 2m12s)  binding-controller    Sync work of resourceBinding(lts/grafana-persistentvolumeclaim) successful.
  Normal   AggregateStatusSucceed  2m11s (x6 over 2m12s)  binding-controller    Update resourceBinding(lts/grafana-persistentvolumeclaim) with AggregatedStatus successfully.
  Normal   ScheduleBindingSucceed  2m11s (x3 over 2m12s)  karmada-scheduler     Binding has been scheduled
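
The events report AggregateStatusSucceed even though the PVC above is still Pending, so it can help to inspect what status was actually aggregated. A hedged way to do that, using the ResourceBinding name from the events:

kubectl get resourcebinding grafana-persistentvolumeclaim -n lts -o yaml \
  --kubeconfig /etc/karmada/karmada-apiserver.config
# inspect status.aggregatedStatus for each member cluster's reported PVC phase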

Environment:

  • Karmada version: 1.3.0
  • Kubernetes version: 1.22.9
  • Others: helm application

lts0609 avatar Sep 28 '22 01:09 lts0609

@Poor12 Please help to reproduce and investigate it with v1.3.0.

RainbowMango avatar Sep 28 '22 01:09 RainbowMango

I found that the following steps make the PVC state normal:

  1. Delete the PVC on the Karmada control plane
  2. Synchronize the resource again via Argo CD
[root@node1 ~]# kubectl get pvc --kubeconfig /etc/karmada/karmada-apiserver.config
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
grafana   Bound                                                      16s

The warning is presumably because I did not manually delete the PVC in the member clusters. After the new PVC is successfully created, the application can be accessed normally.

Events:
  Type     Reason                  Age                 From                  Message
  ----     ------                  ----                ----                  -------
  Normal   SyncWorkSucceed         11m (x5 over 11m)   binding-controller    Sync work of resourceBinding(lts/grafana-persistentvolumeclaim) successful.
  Normal   ApplyPolicySucceed      11m                 resource-detector     Apply policy(lts/grafana) succeed
  Normal   AggregateStatusSucceed  11m (x5 over 11m)   binding-controller    Update resourceBinding(lts/grafana-persistentvolumeclaim) with AggregatedStatus successfully.
  Normal   ScheduleBindingSucceed  11m (x8 over 11m)   karmada-scheduler     Binding has been scheduled
  Warning  SyncFailed              11m (x12 over 11m)  execution-controller  Failed to create resource(lts/grafana) in member cluster(member): PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    ... // 2 identical fields
    Resources:        {Requests: {s"storage": {i: {...}, s: "10Gi", Format: "BinarySI"}}},
    VolumeName:       "pvc-645fa3e9-9e99-416d-a2ee-7305361300ac",
-   StorageClassName: nil,
+   StorageClassName: &"local",
    VolumeMode:       &"Filesystem",
    DataSource:       nil,
  }
  Warning  SyncFailed  30s (x18 over 11m)  execution-controller  Failed to create resource(lts/grafana) in member cluster(host): PersistentVolumeClaim "grafana" is invalid: spec: Forbidden: spec is immutable after creation except resources.requests for bound claims
  core.PersistentVolumeClaimSpec{
    ... // 2 identical fields
    Resources:        {Requests: {s"storage": {i: {...}, s: "10Gi", Format: "BinarySI"}}},
    VolumeName:       "pvc-2cebe437-4469-4a6e-9002-856810e228c5",
-   StorageClassName: nil,
+   StorageClassName: &"local",
    VolumeMode:       &"Filesystem",
    DataSource:       nil,
    DataSourceRef:    nil,
  }
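
The immutable-spec errors above suggest the old PVC still exists in the member clusters with a spec that differs from the template (the member cluster's default StorageClass set StorageClassName to local, whereas the template has nil). A hedged way to confirm this before resyncing, assuming a member-cluster kubeconfig path:

kubectl --kubeconfig /etc/member/kubeconfig get pvc grafana -n lts \
  -o jsonpath='{.spec.storageClassName}{"\n"}{.metadata.deletionTimestamp}{"\n"}'
# a non-empty deletionTimestamp means the old PVC is still being deleted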

Does a PropagationPolicy need to be applied before the PVC is created in order for its status to be synchronized?

lts0609 avatar Sep 28 '22 01:09 lts0609

I guess the PVC was not applied before. You can use kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get work -A to check whether resources were applied successfully to the member clusters.

Poor12 avatar Sep 28 '22 02:09 Poor12

I guess the PVC was not applied before. You can use kubectl --kubeconfig ~/.kube/karmada.config --context karmada-apiserver get work -A to check whether resources were applied successfully to the member clusters.

All resources were successfully created in the member clusters and the application is accessible. When I viewed the Work objects on the Karmada control plane, I found that only the PVC's Work was not applied. I then tried many times, including applying the PropagationPolicy in advance, but the PVC only occasionally synchronized and applied successfully; most of the time the PVC in Karmada stays in the Pending status. What causes this?
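
One way to dig into a Work that is not applied is to describe it in the per-cluster execution namespace (Karmada names these karmada-es-<cluster>; the exact Work name below is a placeholder):

kubectl --kubeconfig /etc/karmada/karmada-apiserver.config get work -n karmada-es-member
kubectl --kubeconfig /etc/karmada/karmada-apiserver.config describe work <pvc-work-name> -n karmada-es-member
# check the Applied condition and any error message in the Work's status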

lts0609 avatar Sep 28 '22 02:09 lts0609

The warning is presumably because I did not manually delete the PVC in the member clusters. After the new PVC is successfully created, the application can be accessed normally.

I would like to ask whether this PVC had already existed in the member cluster without having been distributed through Karmada.

Poor12 avatar Sep 29 '22 01:09 Poor12

I would like to ask whether this PVC had already existed in the member cluster without having been distributed through Karmada.

I think that is because I directly deleted the PVC on the Karmada control plane and resynchronized in Argo CD while the PVC in the member cluster was still in the Terminating status, which caused the warning.
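
If so, the member-cluster PVC would be held in Terminating by the kubernetes.io/pvc-protection finalizer for as long as a pod still mounts it. A hedged check, again assuming a member-cluster kubeconfig:

kubectl --kubeconfig /etc/member/kubeconfig get pvc grafana -n lts \
  -o jsonpath='{.metadata.finalizers}{"\n"}'
# once the pod using the PVC is gone, the finalizer clears and the resync can recreate it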

lts0609 avatar Sep 29 '22 02:09 lts0609

It seems the problem has been fixed. /close

XiShanYongYe-Chang avatar Mar 04 '24 02:03 XiShanYongYe-Chang

@XiShanYongYe-Chang: Closing this issue.

In response to this:

It seems the problem has been fixed. /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

karmada-bot avatar Mar 04 '24 02:03 karmada-bot