cloud-provider-openstack
[manila-csi-plugin] resizer container failed to start up
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: We are using K8s 1.23 (Rancher RKE bootstrap) on OpenStack, with PVs backed by Manila. I'm trying to upgrade the Manila driver and found that the new version added a resizer container. But when I add it to the StatefulSet, it fails with the log below:
kubectl logs -n kube-system openstack-manila-csi-controllerplugin-0 -c resizer
I0921 01:56:27.573780 1 main.go:93] Version : v1.3.0
I0921 01:56:27.576481 1 common.go:111] Probing CSI driver for readiness
F0921 01:56:27.581237 1 main.go:161] CSI driver neither supports controller resize nor node resize
goroutine 1 [running]:
k8s.io/klog/v2.stacks(0xc000138001, 0xc00027e000, 0x69, 0xa0)
/workspace/vendor/k8s.io/klog/v2/klog.go:1026 +0xb9
k8s.io/klog/v2.(*loggingT).output(0x2829f00, 0xc000000003, 0x0, 0x0, 0xc000273c70, 0x1, 0x20af014, 0x7, 0xa1, 0x40e000)
/workspace/vendor/k8s.io/klog/v2/klog.go:975 +0x1e5
k8s.io/klog/v2.(*loggingT).printDepth(0x2829f00, 0xc000000003, 0x0, 0x0, 0x0, 0x0, 0x1, 0xc0001f2b70, 0x1, 0x1)
/workspace/vendor/k8s.io/klog/v2/klog.go:735 +0x185
k8s.io/klog/v2.(*loggingT).print(...)
/workspace/vendor/k8s.io/klog/v2/klog.go:717
k8s.io/klog/v2.Fatal(...)
/workspace/vendor/k8s.io/klog/v2/klog.go:1494
main.main()
/workspace/cmd/csi-resizer/main.go:161 +0x127a
goroutine 5 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x2829f00)
/workspace/vendor/k8s.io/klog/v2/klog.go:1169 +0x8b
created by k8s.io/klog/v2.init.0
/workspace/vendor/k8s.io/klog/v2/klog.go:420 +0xdf
goroutine 16 [select]:
google.golang.org/grpc.(*ccBalancerWrapper).watcher(0xc0000b97c0)
/workspace/vendor/google.golang.org/grpc/balancer_conn_wrappers.go:69 +0xac
created by google.golang.org/grpc.newCCBalancerWrapper
/workspace/vendor/google.golang.org/grpc/balancer_conn_wrappers.go:60 +0x172
goroutine 65 [chan receive]:
google.golang.org/grpc.(*addrConn).resetTransport(0xc0003eb600)
/workspace/vendor/google.golang.org/grpc/clientconn.go:1214 +0x465
created by google.golang.org/grpc.(*addrConn).connect
/workspace/vendor/google.golang.org/grpc/clientconn.go:844 +0x12a
goroutine 66 [IO wait]:
internal/poll.runtime_pollWait(0x7f4ac93ae7d8, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:222 +0x55
internal/poll.(*pollDesc).wait(0xc000197298, 0x72, 0x8000, 0x8000, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc000197280, 0xc000244000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:166 +0x1d5
net.(*netFD).Read(0xc000197280, 0xc000244000, 0x8000, 0x8000, 0xc00052e898, 0xc000235c10, 0x8f4a82)
/usr/local/go/src/net/fd_posix.go:55 +0x4f
net.(*conn).Read(0xc000138ab8, 0xc000244000, 0x8000, 0x8000, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:183 +0x91
bufio.(*Reader).Read(0xc00046a2a0, 0xc0003c23b8, 0x9, 0x9, 0x5000000000000, 0x100235c98, 0x0)
/usr/local/go/src/bufio/bufio.go:227 +0x222
io.ReadAtLeast(0x1ccd5a0, 0xc00046a2a0, 0xc0003c23b8, 0x9, 0x9, 0x9, 0xc00004ac48, 0x0, 0x0)
/usr/local/go/src/io/io.go:328 +0x87
io.ReadFull(...)
/usr/local/go/src/io/io.go:347
golang.org/x/net/http2.readFrameHeader(0xc0003c23b8, 0x9, 0x9, 0x1ccd5a0, 0xc00046a2a0, 0x0, 0x0, 0x0, 0x0)
/workspace/vendor/golang.org/x/net/http2/frame.go:237 +0x89
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0003c2380, 0xc0001f0b40, 0xc0001f0b40, 0x0, 0x0)
/workspace/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc00000a3c0)
/workspace/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1347 +0x1a5
created by google.golang.org/grpc/internal/transport.newHTTP2Client
/workspace/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0xdd1
goroutine 67 [runnable]:
runtime.Gosched(...)
/usr/local/go/src/runtime/proc.go:292
google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00046a540, 0x0, 0x0)
/workspace/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:563 +0x1af
google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc00000a3c0)
/workspace/vendor/google.golang.org/grpc/internal/transport/http2_client.go:396 +0x7b
created by google.golang.org/grpc/internal/transport.newHTTP2Client
/workspace/vendor/google.golang.org/grpc/internal/transport/http2_client.go:394 +0x12ae
But if I remove the container, everything works fine:
kubectl get po -n kube-system | grep manila
openstack-manila-csi-controllerplugin-0   3/3   Running   0   17s
openstack-manila-csi-nodeplugin-6c6nh     2/2   Running   0   45m
openstack-manila-csi-nodeplugin-dtzk4     2/2   Running   0   44m
openstack-manila-csi-nodeplugin-kns85     2/2   Running   0   44m
openstack-manila-csi-nodeplugin-q4hbh     2/2   Running   0   44m
openstack-manila-csi-nodeplugin-t67qc     2/2   Running   0   44m
openstack-manila-csi-nodeplugin-xd8wb     2/2   Running   0   44m
Wondering if I'm missing something? Thanks for your time.
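For reference, the resizer sidecar I added to the controllerplugin StatefulSet looks roughly like this (a sketch, not my exact manifest; the image tag matches the v1.3.0 reported in the log above, but the registry, socket path, and volume name are assumptions from my deployment):

```yaml
# Hypothetical resizer sidecar spec -- paths and names are assumptions
- name: resizer
  image: k8s.gcr.io/sig-storage/csi-resizer:v1.3.0
  args:
    - "--csi-address=$(ADDRESS)"
  env:
    - name: ADDRESS
      # assumed controller plugin socket path
      value: /var/lib/kubelet/plugins/manila.csi.openstack.org/csi-controllerplugin.sock
  volumeMounts:
    - name: plugin-dir
      mountPath: /var/lib/kubelet/plugins/manila.csi.openstack.org
```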
kubectl logs -n kube-system openstack-manila-csi-controllerplugin-0 -c resizer
Looks like the failure happens during resizing; wondering whether this issue belongs in https://github.com/kubernetes-csi/external-resizer/ ?
Hi @lw8008, can you please let us know which version of manila-csi you're deploying?
Hi @gman0, I've upgraded manila-csi and the resizer is up and running now, but I hit the error below when testing volume expansion:
kubectl logs -n kube-system openstack-manila-csi-controllerplugin-0 -f --tail=10 -c resizer
...
E1025 06:17:37.218708 1 controller.go:282] Error syncing PVC: resize volume "pvc-2c8016a3-dfae-444e-9557-2d41c1cb49fe" by resizer "manila.csi.openstack.org" failed: rpc error: code = InvalidArgument desc = volume expand secrets cannot be nil or empty
I1025 06:17:37.218755 1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"testabc", UID:"2c8016a3-dfae-444e-9557-2d41c1cb49fe", APIVersion:"v1", ResourceVersion:"367673033", FieldPath:""}): type: 'Warning' reason: 'VolumeResizeFailed' resize volume "pvc-2c8016a3-dfae-444e-9557-2d41c1cb49fe" by resizer "manila.csi.openstack.org" failed: rpc error: code = InvalidArgument desc = volume expand secrets cannot be nil or empty
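For context, I triggered the expansion by patching the PVC's storage request, roughly like this (the namespace matches the event above; the target size here is illustrative):

```shell
# Request a larger size on the PVC (size value is illustrative)
kubectl patch pvc testabc -n default --type merge \
  -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'

# Then follow the resizer logs for the result
kubectl logs -n kube-system openstack-manila-csi-controllerplugin-0 -c resizer -f --tail=10
```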
But I've already added the below to the StorageClass and bounced the Manila pod:
csi.storage.k8s.io/controller-expand-secret-name: csi-manila-secrets
csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
The content of the StorageClass is as follows:
cat b.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: default
provisioner: manila.csi.openstack.org
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
parameters:
  type: default
  shareNetworkID: f81cf32e
  csi.storage.k8s.io/node-publish-secret-name: csi-manila-secrets
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
  csi.storage.k8s.io/node-stage-secret-name: csi-manila-secrets
  csi.storage.k8s.io/node-stage-secret-namespace: kube-system
  csi.storage.k8s.io/provisioner-secret-name: csi-manila-secrets
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/controller-expand-secret-name: csi-manila-secrets
  csi.storage.k8s.io/controller-expand-secret-namespace: kube-system
mountOptions:
  - hard
  - nfsvers=3
  - rsize=32768
  - wsize=32768
Wondering if I'm missing anything?
@lw8008 does the resizer complain about not being able to retrieve the secret? Can you send a bigger chunk of the logs?
@lw8008
But I've already added the below to the StorageClass and bounced the Manila pod:
If you haven't already done so, can you please recreate the PVC too? Recreating the storage class alone is probably not enough (and restarting the Pod has no effect on this, FYI). I don't recall exactly how the resizer retrieves the secret ref, but I think it may be getting it from the PV directly, not from the storage class.
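One way to verify this theory (the PV name below is taken from your error message) is to inspect the expand secret ref recorded on the PV object itself:

```shell
# Print the controllerExpandSecretRef stored on the PV's CSI spec
kubectl get pv pvc-2c8016a3-dfae-444e-9557-2d41c1cb49fe \
  -o jsonpath='{.spec.csi.controllerExpandSecretRef}'
```

If this prints nothing, it would be consistent with the "volume expand secrets cannot be nil or empty" error, since the PV was provisioned before the expand secret parameters were added to the storage class.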
@gman0 it works with a newly created PVC, thanks for your guidance.