external-resizer

openebs (LVM mode): csi-resizer fails to expand PV

Open · karony opened this issue 2 years ago · 1 comment


Background: I expanded the PV size for a StatefulSet with 6 pods, but the expansion failed for 3 of the PVs. I made sure there is enough free disk space. (A sketch of the steps involved is below.)
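
For reference, this is roughly how such an expansion is triggered and how free LVM capacity can be checked. The namespace and PVC name are taken from this report, the target size is illustrative, "vg_name" is a placeholder for the volume group backing the localpv-lvm StorageClass, and the StorageClass must have allowVolumeExpansion: true.

# Raise the PVC's storage request (repeated for each replica's PVC).
kubectl -n di-elk-spsec-hids-data patch pvc elasticsearch-data-uat-es-data-4 \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"1025Gi"}}}}'

# On the node, confirm the volume group has enough free space.
vgs vg_name -o vg_name,vg_size,vg_free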

Resizer log and PVC info:

[kube-prod01@ip-10-128-155-67 ~]$ kubectl describe pvc elasticsearch-data-uat-es-data-4 -n di-elk-spsec-hids-data 
Name:          elasticsearch-data-uat-es-data-4
Namespace:     di-elk-spsec-hids-data
StorageClass:  localpv-lvm
Status:        Bound
Volume:        pvc-99c528da-a213-45b7-87bb-0d89978fa5d4
Labels:        common.k8s.elastic.co/type=elasticsearch
               elasticsearch.k8s.elastic.co/cluster-name=spsec-hids-data-uat
               elasticsearch.k8s.elastic.co/statefulset-name=spsec-hids-data-uat-es-data
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: local.csi.openebs.io
               volume.kubernetes.io/selected-node: ip-10-169-3-154
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1025Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       spsec-hids-data-uat-es-data-4
Events:
  Type     Reason                  Age                  From                                   Message
  ----     ------                  ----                 ----                                   -------
  Warning  ExternalExpanding       87s (x2 over 5h15m)  volume_expand                          Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
  Warning  VolumeResizeFailed      84s (x2 over 5h15m)  external-resizer local.csi.openebs.io  Mark PVC "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4" as file system resize required failed: can't patch status of  PVC di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4 with Operation cannot be fulfilled on persistentvolumeclaims "elasticsearch-data-uat-es-data-4": the object has been modified; please apply your changes to the latest version and try again
  Normal   Resizing                83s (x4 over 5h15m)  external-resizer local.csi.openebs.io  External resizer is resizing volume pvc-99c528da-a213-45b7-87bb-0d89978fa5d4
  Normal   VolumeResizeSuccessful  81s (x2 over 5h15m)  external-resizer local.csi.openebs.io  Resize volume succeeded
"di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4"
I0915 02:34:05.114673       1 controller.go:291] Started PVC processing "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4"
I0915 02:34:05.883143       1 request.go:600] Waited for 768.003478ms due to client-side throttling, not priority and fairness, request: PATCH:https://172.16.0.1:443/api/v1/namespaces/di-elk-spsec-hids-data/persistentvolumeclaims/elasticsearch-data-uat-es-data-4/status
I0915 02:34:05.888246       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"di-elk-spsec-hids-data", Name:"elasticsearch-data-uat-es-data-4", UID:"99c528da-a213-45b7-87bb-0d89978fa5d4", APIVersion:"v1", ResourceVersion:"285193539", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-99c528da-a213-45b7-87bb-0d89978fa5d4
I0915 02:34:07.683416       1 request.go:600] Waited for 792.792301ms due to client-side throttling, not priority and fairness, request: PATCH:https://172.16.0.1:443/api/v1/namespaces/di-elk-spsec-hids-data/persistentvolumeclaims/elasticsearch-data-uat-es-data-4/status
E0915 02:34:07.687589       1 controller.go:282] Error syncing PVC: Mark PVC "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4" as file system resize required failed: can't patch status of  PVC di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4 with Operation cannot be fulfilled on persistentvolumeclaims "elasticsearch-data-uat-es-data-4": the object has been modified; please apply your changes to the latest version and try again
I0915 02:34:07.687623       1 controller.go:291] Started PVC processing "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4"
I0915 02:34:07.687654       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"di-elk-spsec-hids-data", Name:"elasticsearch-data-uat-es-data-4", UID:"99c528da-a213-45b7-87bb-0d89978fa5d4", APIVersion:"v1", ResourceVersion:"285193539", FieldPath:""}): type: 'Warning' reason: 'VolumeResizeFailed' Mark PVC "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4" as file system resize required failed: can't patch status of  PVC di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4 with Operation cannot be fulfilled on persistentvolumeclaims "elasticsearch-data-uat-es-data-4": the object has been modified; please apply your changes to the latest version and try again
I0915 02:34:08.283740       1 request.go:600] Waited for 595.550153ms due to client-side throttling, not priority and fairness, request: PATCH:https://172.16.0.1:443/api/v1/namespaces/di-elk-spsec-hids-data/persistentvolumeclaims/elasticsearch-data-uat-es-data-4/status
I0915 02:34:08.290961       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"di-elk-spsec-hids-data", Name:"elasticsearch-data-uat-es-data-4", UID:"99c528da-a213-45b7-87bb-0d89978fa5d4", APIVersion:"v1", ResourceVersion:"285193644", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-99c528da-a213-45b7-87bb-0d89978fa5d4
I0915 02:34:09.683940       1 request.go:600] Waited for 794.616164ms due to client-side throttling, not priority and fairness, request: PATCH:https://172.16.0.1:443/api/v1/namespaces/di-elk-spsec-hids-data/persistentvolumeclaims/elasticsearch-data-uat-es-data-4/status
I0915 02:34:09.689026       1 controller.go:533] Resize PVC "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4" finished
I0915 02:34:09.689071       1 controller.go:291] Started PVC processing "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4"
I0915 02:34:09.689081       1 controller.go:334] No need to resize PVC "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4"
I0915 02:34:09.689093       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"di-elk-spsec-hids-data", Name:"elasticsearch-data-uat-es-data-4", UID:"99c528da-a213-45b7-87bb-0d89978fa5d4", APIVersion:"v1", ResourceVersion:"285193644", FieldPath:""}): type: 'Normal' reason: 'VolumeResizeSuccessful' Resize volume succeeded
I0915 02:34:09.689378       1 controller.go:291] Started PVC processing "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4"
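
Worth noting: the log ends with "Resize volume succeeded", so the controller-side resize eventually went through after the retry; what may still be pending is the filesystem expansion on the node. A quick way to check, using the PV/PVC names from this report:

# Compare the PV capacity with what the PVC status reports:
kubectl get pv pvc-99c528da-a213-45b7-87bb-0d89978fa5d4 -o jsonpath='{.spec.capacity.storage}'
kubectl -n di-elk-spsec-hids-data get pvc elasticsearch-data-uat-es-data-4 -o jsonpath='{.status.capacity.storage}'

# Until kubelet finishes NodeExpandVolume, the PVC carries a
# FileSystemResizePending condition:
kubectl -n di-elk-spsec-hids-data get pvc elasticsearch-data-uat-es-data-4 -o jsonpath='{.status.conditions}'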

I don't understand the cause of this error: Error syncing PVC: Mark PVC "di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4" as file system resize required failed: can't patch status of PVC di-elk-spsec-hids-data/elasticsearch-data-uat-es-data-4 with Operation cannot be fulfilled on persistentvolumeclaims "elasticsearch-data-uat-es-data-4": the object has been modified; please apply your changes to the latest version and try again

Can you give me some hints based on this message?
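
For context, this message is the API server's optimistic-concurrency conflict (HTTP 409): the resizer submitted a PVC status patch carrying a stale resourceVersion because another client (kubelet, the ECK operator, etc.) updated the PVC in between. It is normally transient, and the log above shows the resizer retrying and succeeding. The same conflict can be provoked by replaying a stale full-object replace (names from this report):

# Save the PVC as currently seen, including its resourceVersion.
kubectl -n di-elk-spsec-hids-data get pvc elasticsearch-data-uat-es-data-4 -o json > /tmp/pvc.json

# If anything modifies the PVC before this lands, the API server rejects it
# with the same "the object has been modified" conflict:
kubectl -n di-elk-spsec-hids-data replace -f /tmp/pvc.json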

Version info: Kubernetes 1.22.5, csi-resizer 1.2

Thank you very much!

karony · Sep 21 '22 07:09

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Dec 20 '22 08:12

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Jan 19 '23 08:01

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot · Feb 18 '23 09:02

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot · Feb 18 '23 09:02