
StatefulSet error in GCP Kubernetes manifest

Open krol3 opened this issue 3 years ago • 4 comments

Description

On GKE I'm getting a "Multi-Attach error for volume". Solution: use node affinity so that the StatefulSet and the Deployment land on the same node (sketched below).

We need to review the manifest to avoid these errors.
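
For reference, a minimal sketch of the node affinity workaround, assuming a hypothetical node label postee-node=true that you would apply to the target node yourself; the same affinity block would go under spec.template.spec in both the postee StatefulSet and the UI Deployment:

  # Sketch only: pins pods to a node labelled postee-node=true (hypothetical label)
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: postee-node
                operator: In
                values:
                  - "true"

With both workloads pinned to the same node, a ReadWriteOnce volume only ever needs to attach to that one node, which avoids the Multi-Attach error.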

krol3 avatar Jun 26 '22 23:06 krol3

The same here, but not on GCP; fixed by changing https://github.com/aquasecurity/postee/blob/main/deploy/helm/postee/values.yaml#L254 to ReadWriteMany. @simar7 why does postee need to be a StatefulSet rather than a Deployment, is there any reason?
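
For context, roughly what that values.yaml change looks like; the key names below are an assumption based on the linked line, not copied from the chart, so check the real file:

  # Assumed structure of the persistence section in values.yaml (key names may differ)
  persistentVolume:
    accessModes:
      - ReadWriteMany   # was ReadWriteOnce

Note that ReadWriteMany only works if the storage class backend supports RWX volumes (e.g. NFS-style storage); the default block-storage classes on GKE, DigitalOcean, and AWS EBS generally do not.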

grglzrv avatar Sep 25 '22 06:09 grglzrv

I am also getting a multi-attach error on DigitalOcean. Here are the events from the UI pod (a couple of kubectl checks follow after the events):

Events:
  Type     Reason              Age                    From                     Message
  ----     ------              ----                   ----                     -------
  Warning  FailedScheduling    6m48s                  default-scheduler        0/3 nodes are available: 3 persistentvolumeclaim "app-postee-db-app-postee-0" not found. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling    6m46s                  default-scheduler        0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling    6m44s                  default-scheduler        0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
  Normal   Scheduled           6m40s                  default-scheduler        Successfully assigned postee/app-posteeui-57594694cb-7zv88 to pool-e49pvo8ea-75ssj
  Warning  FailedAttachVolume  6m30s                  attachdetach-controller  Multi-Attach error for volume "pvc-d6b1ea49-f949-4f86-8fc1-25f7e7bed317" Volume is already used by pod(s) app-postee-0
  Warning  FailedAttachVolume  6m25s                  attachdetach-controller  Multi-Attach error for volume "pvc-c4a9e5c5-199f-4cc1-b7f0-5dc62cef1f2c" Volume is already used by pod(s) app-postee-0
  Warning  FailedMount         2m23s (x2 over 4m37s)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[postee-config postee-db], unattached volumes=[postee-config kube-api-access-f9zj6 postee-db]: timed out waiting for the condition
  Warning  FailedMount         8s                     kubelet                  Unable to attach or mount volumes: unmounted volumes=[postee-db postee-config], unattached volumes=[kube-api-access-f9zj6 postee-db postee-config]: timed out waiting for the condition
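
For anyone hitting this, a couple of commands to check which node each pod landed on and the state of the PVCs (namespace and PVC name taken from the events above):

  kubectl -n postee get pods -o wide
  kubectl -n postee get pvc
  kubectl -n postee describe pvc app-postee-db-app-postee-0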

AnaisUrlichs avatar Oct 10 '22 14:10 AnaisUrlichs

@grglzrv the workaround/change does not seem to work for me. The postee-ui pod is now starting, but the postee pod cannot start and has these errors in the logs (a few commands for digging into the init containers follow below):

│ stream logs failed container "setting-db" in pod "app-postee-0" is waiting to start: PodInitializing for postee/ap │
│ stream logs failed container "setting-cfg" in pod "app-postee-0" is waiting to start: PodInitializing for postee/a │
│ stream logs failed container "postee" in pod "app-postee-0" is waiting to start: PodInitializing for postee/app-po │
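
To dig into why the init containers never finish, something like the following should surface their logs and the pod events (pod and container names taken from the log lines above):

  kubectl -n postee logs app-postee-0 -c setting-cfg
  kubectl -n postee logs app-postee-0 -c setting-db
  kubectl -n postee describe pod app-postee-0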

AnaisUrlichs avatar Oct 10 '22 14:10 AnaisUrlichs

Facing a similar issue in AWS EKS. Unable to start any pod.

Defaulted container "postee" out of: postee, setting-db (init), setting-cfg (init)
Error from server (BadRequest): container "postee" in pod "postee-0" is waiting to start: PodInitializing

kmganna avatar Mar 20 '23 01:03 kmganna