containerized-data-importer
virt-launcher pod and pv are on different nodes
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened: We use the WaitForFirstConsumer hostpath-provisioner to create PVCs/PVs for KubeVirt VMs. However, there are cases where the virt-launcher pod and the PV end up on different nodes, which prevents the VMI from starting successfully. We create several KubeVirt VMs simultaneously, all of which have node selectors. Some hit this problem, while others are created successfully.
What you expected to happen: The virt-launcher pod and the PV are on the same node.
Anything else we need to know?: After deleting the failed VMI, the automatically recreated VMI starts successfully, and its virt-launcher pod and PV are both on the target node. I noticed that the cdi-deployment logs for the failed VMI and the normal VMI differ.
The normal vmi
I0317 04:55:35.663158 1 util.go:317] GetFilesystemOverhead with PVC&PersistentVolumeClaim{ObjectMeta:{cve-netnode01-system-disk-0 cie /api/v1/namespaces/cie/persistentvolumeclaims/cve-netnode01-system-disk-0 f6774605-d199-47bd-9516-fd5a14c127a1 492224 0 2022-03-17 04:54:40 +0000 UTC <nil> <nil> map[app:containerized-data-importer] map[cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.import.endpoint:http://192.168.62.100:18080/system.qcow2 cdi.kubevirt.io/storage.import.importPodName:importer-cve-netnode01-system-disk-0 cdi.kubevirt.io/storage.import.source:http cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.preallocation.requested:false pv.kubernetes.io/bind-completed:yes pv.kubernetes.io/bound-by-controller:yes volume.beta.kubernetes.io/storage-provisioner:kubevirt.io/hostpath-provisioner volume.kubernetes.io/selected-node:hci-001] [{cdi.kubevirt.io/v1beta1 DataVolume cve-netnode01-system-disk-0 3febca31-75db-47f3-b803-44195dfabf66 0xc001b27f9a 0xc001b27f9b}] [kubernetes.io/pvc-protection] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{42949672960 0} {<nil>} BinarySI},},},VolumeName:pvc-f6774605-d199-47bd-9516-fd5a14c127a1,Selector:nil,StorageClassName:*hostpath-sas,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Bound,AccessModes:[ReadWriteOnce],Capacity:ResourceList{storage: {{42949672960 0} {<nil>} BinarySI},},Conditions:[]PersistentVolumeClaimCondition{},},}
The error vmi
I0317 04:54:35.871073 1 util.go:317] GetFilesystemOverhead with PVC&PersistentVolumeClaim{ObjectMeta:{eip-cluster01-data-disk-60g-1 cie /api/v1/namespaces/cie/persistentvolumeclaims/eip-cluster01-data-disk-60g-1 67662900-538d-4db5-9dee-5dc03fdf7a97 490007 0 2022-03-17 04:54:35 +0000 UTC <nil> <nil> map[app:containerized-data-importer] map[cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.import.importPodName:importer-eip-cluster01-data-disk-60g-1 cdi.kubevirt.io/storage.import.source:none cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.preallocation.requested:false] [{cdi.kubevirt.io/v1beta1 DataVolume eip-cluster01-data-disk-60g-1 53067044-e20b-45bc-bb18-d8f312490829 0xc001c47d87 0xc001c47d88}] [kubernetes.io/pvc-protection] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{64424509440 0} {<nil>} 60Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hostpath-sas,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
The failed VMI's PVC has no volume.kubernetes.io/selected-node annotation, while the normal PVC does.
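A quick way to compare that scheduler annotation across PVCs is something like the sketch below (a suggestion, not from the original report; the PVC names are the ones shown in the logs above, adjust to your own):
# Print the node the scheduler selected for each PVC; an empty column means
# no consumer pod was scheduled against the PVC yet (WaitForFirstConsumer).
kubectl get pvc -n cie cve-netnode01-system-disk-0 eip-cluster01-data-disk-60g-1 \
  -o custom-columns='NAME:.metadata.name,SELECTED-NODE:.metadata.annotations.volume\.kubernetes\.io/selected-node'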
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml): v1.32.0
- KubeVirt version: 0.41
- Kubernetes version (use kubectl version): v1.20.13
Thanks for reporting.
I have noticed one thing: the error log you show is the log for the PersistentVolumeClaim, and it shows a PVC that is not yet bound. The PVC gets the selected-node annotation after it is successfully bound.
It would be better to provide the kubectl describe output of the DataVolume/PersistentVolumeClaim/PersistentVolume when you see the problem. It is hard to know what is going on without seeing the objects and events.
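For reference, a sketch of commands that would capture that information (placeholder names in angle brackets, fill in your own objects):
kubectl describe dv -n <namespace> <dv-name>
kubectl describe pvc -n <namespace> <pvc-name>
kubectl describe pv <pv-name>
kubectl describe vm -n <namespace> <vm-name>
# Recent events in the namespace often show why binding or scheduling stalled
kubectl get events -n <namespace> --sort-by=.lastTimestamp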
It might be a problem with the provisioner. But it also might be a well-known limitation of the way CDI and KubeVirt handle WFFC. If so, then it is a duplicate of https://github.com/kubevirt/containerized-data-importer/issues/2033.
Please check the issue here: https://github.com/kubevirt/kubevirt/issues/6531. If you have any ideas that would help or some real-life scenarios please join the discussion there.
Thanks for your quick reply. Here's the PVC/PV/DV info.
This is the simultaneous PVC
This is the simultaneous DV
dv
root@hci-001:~# kubectl get dv -n cie eip-cluster01-data-disk-60g-1 -o yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  creationTimestamp: "2022-03-17T04:54:29Z"
  generation: 6
  labels:
    kubevirt.io/created-by: a2dfebc6-4e28-49a1-aa4d-aa1eda3c0583
  name: eip-cluster01-data-disk-60g-1
  namespace: cie
  ownerReferences:
  - apiVersion: kubevirt.io/v1
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachine
    name: eip-cluster01
    uid: a2dfebc6-4e28-49a1-aa4d-aa1eda3c0583
  resourceVersion: "491978"
  selfLink: /apis/cdi.kubevirt.io/v1beta1/namespaces/cie/datavolumes/eip-cluster01-data-disk-60g-1
  uid: 53067044-e20b-45bc-bb18-d8f312490829
spec:
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 60Gi
    storageClassName: hostpath-sas
  source:
    blank: {}
status:
  conditions:
  - lastHeartbeatTime: "2022-03-17T04:54:51Z"
    lastTransitionTime: "2022-03-17T04:54:51Z"
    message: PVC eip-cluster01-data-disk-60g-1 Bound
    reason: Bound
    status: "True"
    type: Bound
  - lastHeartbeatTime: "2022-03-17T04:55:27Z"
    lastTransitionTime: "2022-03-17T04:55:27Z"
    status: "True"
    type: Ready
  - lastHeartbeatTime: "2022-03-17T04:55:27Z"
    lastTransitionTime: "2022-03-17T04:55:27Z"
    message: Import Complete
    reason: Completed
    status: "False"
    type: Running
  phase: Succeeded
  progress: 100.0%
pvc
root@hci-001:~# kubectl get pvc -n cie eip-cluster01-data-disk-60g-1 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Import Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.import.importPodName: importer-eip-cluster01-data-disk-60g-1
    cdi.kubevirt.io/storage.import.source: none
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: kubevirt.io/hostpath-provisioner
    volume.kubernetes.io/selected-node: hci-002
  creationTimestamp: "2022-03-17T04:54:35Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
  name: eip-cluster01-data-disk-60g-1
  namespace: cie
  ownerReferences:
  - apiVersion: cdi.kubevirt.io/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: DataVolume
    name: eip-cluster01-data-disk-60g-1
    uid: 53067044-e20b-45bc-bb18-d8f312490829
  resourceVersion: "490988"
  selfLink: /api/v1/namespaces/cie/persistentvolumeclaims/eip-cluster01-data-disk-60g-1
  uid: 67662900-538d-4db5-9dee-5dc03fdf7a97
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 60Gi
  storageClassName: hostpath-sas
  volumeMode: Filesystem
  volumeName: pvc-67662900-538d-4db5-9dee-5dc03fdf7a97
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 60Gi
  phase: Bound
pv
root@hci-001:~# kubectl get pv -n cie pvc-67662900-538d-4db5-9dee-5dc03fdf7a97 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    hostPathProvisionerIdentity: kubevirt.io/hostpath-provisioner
    kubevirt.io/provisionOnNode: hci-002
    pv.kubernetes.io/provisioned-by: kubevirt.io/hostpath-provisioner
  creationTimestamp: "2022-03-17T04:54:35Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-67662900-538d-4db5-9dee-5dc03fdf7a97
  resourceVersion: "490066"
  selfLink: /api/v1/persistentvolumes/pvc-67662900-538d-4db5-9dee-5dc03fdf7a97
  uid: 1f47e4b6-f08a-4262-be41-cdb495797f5e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 60Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: eip-cluster01-data-disk-60g-1
    namespace: cie
    resourceVersion: "490040"
    uid: 67662900-538d-4db5-9dee-5dc03fdf7a97
  hostPath:
    path: /data/hpvolumesdata/pvc-67662900-538d-4db5-9dee-5dc03fdf7a97
    type: ""
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - hci-002
  persistentVolumeReclaimPolicy: Delete
  storageClassName: hostpath-sas
  volumeMode: Filesystem
status:
  phase: Bound
And this is the simultaneous PV/pod/VMI
Thanks. It would be better to show kubectl describe instead of kubectl get for the DV/PVC/PV and VM, to see whether there are any interesting events recorded for the objects.
Does the storage class hostpath-sas have the WaitForFirstConsumer binding mode?
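A quick way to check (a sketch; volumeBindingMode is a top-level field of the StorageClass):
kubectl get sc hostpath-sas -o jsonpath='{.volumeBindingMode}{"\n"}'
# Expected output for this setup: WaitForFirstConsumer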
Yes, the storage class uses the WaitForFirstConsumer binding mode. The VM has been recreated, so I cannot describe it now.
And I have a detailed log that may be useful:
root@hci-001:~# kubectl logs -n cdi cdi-deployment-59d6c76b5-r5wsq
I0317 04:54:28.607264 1 controller.go:89] Note: increase the -v level in the controller deployment for more detailed logging, eg. -v=2 or -v=3
{"level":"info","ts":1647492868.6075983,"logger":"main","msg":"Verbosity level","verbose":"1","debug":false}
W0317 04:54:28.607654 1 client_config.go:552] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0317 04:54:28.636759 1 leaderelection.go:51] Attempting to acquire leader lease
I0317 04:54:28.637023 1 leaderelection.go:242] attempting to acquire leader lease cdi/cdi-controller-leader-election-helper...
I0317 04:54:28.644333 1 leaderelection.go:252] successfully acquired lease cdi/cdi-controller-leader-election-helper
I0317 04:54:28.644497 1 leaderelection.go:105] Successfully acquired leadership lease
I0317 04:54:28.644530 1 controller.go:104] Starting CDI controller components
I0317 04:54:29.696727 1 request.go:621] Throttling request took 1.04564427s, request: GET:https://100.105.0.1:443/apis/discovery.k8s.io/v1beta1?timeout=32s
{"level":"info","ts":1647492869.9038038,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":8080"}
{"level":"info","ts":1647492875.4463408,"logger":"controller","msg":"Initialized CDI Config object"}
I0317 04:54:35.447488 1 controller.go:171] created cdi controllers
{"level":"info","ts":1647492875.548153,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
{"level":"info","ts":1647492875.5483048,"logger":"controller","msg":"Starting EventSource","controller":"transfer-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.5483367,"logger":"controller","msg":"Starting EventSource","controller":"import-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.5484707,"logger":"controller","msg":"Starting EventSource","controller":"transfer-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.5484643,"logger":"controller","msg":"Starting EventSource","controller":"datavolume-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.5485146,"logger":"controller","msg":"Starting EventSource","controller":"config-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.5483618,"logger":"controller","msg":"Starting EventSource","controller":"clone-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.5485718,"logger":"controller","msg":"Starting EventSource","controller":"upload-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.648957,"logger":"controller","msg":"Starting EventSource","controller":"import-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.6490881,"logger":"controller","msg":"Starting EventSource","controller":"clone-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.649154,"logger":"controller","msg":"Starting EventSource","controller":"upload-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.6491656,"logger":"controller","msg":"Starting EventSource","controller":"config-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.6500528,"logger":"controller","msg":"Starting EventSource","controller":"transfer-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.650102,"logger":"controller","msg":"Starting EventSource","controller":"transfer-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.6501398,"logger":"controller","msg":"Starting EventSource","controller":"datavolume-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.6502995,"logger":"controller","msg":"Starting Controller","controller":"datavolume-controller"}
{"level":"info","ts":1647492875.7496388,"logger":"controller","msg":"Starting EventSource","controller":"config-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.7499032,"logger":"controller","msg":"Starting Controller","controller":"import-controller"}
{"level":"info","ts":1647492875.7503142,"logger":"controller","msg":"Starting EventSource","controller":"upload-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.7504363,"logger":"controller","msg":"Starting Controller","controller":"transfer-controller"}
{"level":"info","ts":1647492875.7507899,"logger":"controller","msg":"Starting EventSource","controller":"clone-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.751173,"logger":"controller","msg":"Starting Controller","controller":"clone-controller"}
{"level":"info","ts":1647492875.8501465,"logger":"controller","msg":"Starting EventSource","controller":"config-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1647492875.8502474,"logger":"controller","msg":"Starting workers","controller":"import-controller","worker count":1}
{"level":"info","ts":1647492875.8507664,"logger":"controller","msg":"Starting Controller","controller":"upload-controller"}
{"level":"info","ts":1647492875.8509867,"logger":"controller","msg":"Starting workers","controller":"datavolume-controller","worker count":1}
{"level":"info","ts":1647492875.851298,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/eip-cluster01-data-disk-60g-1"}
I0317 04:54:35.871073 1 util.go:317] GetFilesystemOverhead with PVC&PersistentVolumeClaim{ObjectMeta:{eip-cluster01-data-disk-60g-1 cie /api/v1/namespaces/cie/persistentvolumeclaims/eip-cluster01-data-disk-60g-1 67662900-538d-4db5-9dee-5dc03fdf7a97 490007 0 2022-03-17 04:54:35 +0000 UTC <nil> <nil> map[app:containerized-data-importer] map[cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.import.importPodName:importer-eip-cluster01-data-disk-60g-1 cdi.kubevirt.io/storage.import.source:none cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.preallocation.requested:false] [{cdi.kubevirt.io/v1beta1 DataVolume eip-cluster01-data-disk-60g-1 53067044-e20b-45bc-bb18-d8f312490829 0xc001c47d87 0xc001c47d88}] [kubernetes.io/pvc-protection] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{64424509440 0} {<nil>} 60Gi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hostpath-sas,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
E0317 04:54:35.871342 1 util.go:343] CDIConfig filesystemOverhead used before config controller ran reconcile. Hopefully this only happens during unit testing.
I0317 04:54:35.871414 1 util.go:662] Applying PVC annotation on the podsidecar.istio.io/injectfalse
{"level":"info","ts":1647492875.89543,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/eip-cluster01-system-disk-0"}
I0317 04:54:35.907852 1 util.go:317] GetFilesystemOverhead with PVC&PersistentVolumeClaim{ObjectMeta:{eip-cluster01-system-disk-0 cie /api/v1/namespaces/cie/persistentvolumeclaims/eip-cluster01-system-disk-0 948b730e-2ee3-4245-b229-c346740e6e87 490023 0 2022-03-17 04:54:35 +0000 UTC <nil> <nil> map[app:containerized-data-importer] map[cdi.kubevirt.io/storage.contentType:kubevirt cdi.kubevirt.io/storage.import.endpoint:http://192.168.62.100:18080/system.qcow2 cdi.kubevirt.io/storage.import.importPodName:importer-eip-cluster01-system-disk-0 cdi.kubevirt.io/storage.import.source:http cdi.kubevirt.io/storage.pod.restarts:0 cdi.kubevirt.io/storage.preallocation.requested:false] [{cdi.kubevirt.io/v1beta1 DataVolume eip-cluster01-system-disk-0 c8a57451-0c81-4284-a0cb-9e4d28e54f4f 0xc0011e70d4 0xc0011e70d5}] [kubernetes.io/pvc-protection] []},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{42949672960 0} {<nil>} BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*hostpath-sas,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},}
E0317 04:54:35.908072 1 util.go:343] CDIConfig filesystemOverhead used before config controller ran reconcile. Hopefully this only happens during unit testing.
I0317 04:54:35.908132 1 util.go:662] Applying PVC annotation on the podsidecar.istio.io/injectfalse
{"level":"info","ts":1647492875.9133844,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/eip-cluster02-data-disk-60g-1"}
{"level":"error","ts":1647492875.9207766,"logger":"controller","msg":"Reconciler error","controller":"import-controller","name":"eip-cluster01-system-disk-0","namespace":"cie","error":"Operation cannot be fulfilled on persistentvolumeclaims \"eip-cluster01-system-disk-0\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tvendor/github.com/go-logr/zapr/zapr.go:128\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:237\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1647492875.9271662,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/eip-cluster02-system-disk-0"}
{"level":"info","ts":1647492875.9465973,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/cve-netnode01-data-disk-60g-1"}
{"level":"info","ts":1647492875.95064,"logger":"controller","msg":"Starting Controller","controller":"config-controller"}
{"level":"info","ts":1647492875.9506972,"logger":"controller","msg":"Starting workers","controller":"config-controller","worker count":1}
{"level":"info","ts":1647492875.9508371,"logger":"controller.config-controller","msg":"reconciling CDIConfig","CDIConfig":"/config"}
{"level":"info","ts":1647492875.9509177,"logger":"controller","msg":"Starting workers","controller":"upload-controller","worker count":1}
{"level":"info","ts":1647492875.9514794,"logger":"controller","msg":"Starting workers","controller":"transfer-controller","worker count":1}
{"level":"info","ts":1647492875.9515033,"logger":"controller","msg":"Starting workers","controller":"clone-controller","worker count":1}
{"level":"info","ts":1647492875.9548457,"logger":"controller.config-controller.CDIconfig.IngressReconcile","msg":"No ingress found, setting to blank","IngressURL":""}
{"level":"info","ts":1647492878.146274,"logger":"controller.config-controller.CDIconfig.RouteReconcile","msg":"No route found, setting to blank","RouteURL":""}
{"level":"info","ts":1647492878.1463864,"logger":"controller.config-controller.CDIconfig.StorageClassReconcile","msg":"No default storage class found, setting scratch space to blank"}
{"level":"info","ts":1647492878.146411,"logger":"controller.config-controller.CDIconfig.FilesystemOverhead","msg":"No filesystem overhead found in status, initializing to defaults"}
{"level":"info","ts":1647492878.1465144,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/cve-netnode01-system-disk-0"}
I0317 04:54:39.741636 1 request.go:621] Throttling request took 1.49710485s, request: GET:https://100.105.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
{"level":"info","ts":1647492880.900997,"logger":"controller.config-controller","msg":"Updating CDIConfig","CDIConfig":"/config","CDIConfig.Name":"config","config":{"name":"config"}}
{"level":"info","ts":1647492880.9081478,"logger":"controller.config-controller","msg":"reconciling CDIConfig","CDIConfig":"/config"}
{"level":"info","ts":1647492880.9118192,"logger":"controller.config-controller.CDIconfig.IngressReconcile","msg":"No ingress found, setting to blank","IngressURL":""}
{"level":"info","ts":1647492883.643105,"logger":"controller.config-controller.CDIconfig.RouteReconcile","msg":"No route found, setting to blank","RouteURL":""}
{"level":"info","ts":1647492883.6432176,"logger":"controller.config-controller.CDIconfig.StorageClassReconcile","msg":"No default storage class found, setting scratch space to blank"}
{"level":"info","ts":1647492886.3920314,"logger":"controller.config-controller","msg":"Updating CDIConfig","CDIConfig":"/config","CDIConfig.Name":"config","config":{"name":"config"}}
{"level":"info","ts":1647492886.3921244,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/cve-netnode02-data-disk-60g-1"}
{"level":"info","ts":1647492886.398991,"logger":"controller.config-controller","msg":"reconciling CDIConfig","CDIConfig":"/config"}
{"level":"info","ts":1647492886.4025595,"logger":"controller.config-controller.CDIconfig.IngressReconcile","msg":"No ingress found, setting to blank","IngressURL":""}
{"level":"info","ts":1647492889.1405373,"logger":"controller.config-controller.CDIconfig.RouteReconcile","msg":"No route found, setting to blank","RouteURL":""}
{"level":"info","ts":1647492889.1406877,"logger":"controller.config-controller.CDIconfig.StorageClassReconcile","msg":"No default storage class found, setting scratch space to blank"}
{"level":"info","ts":1647492889.1407025,"logger":"controller.datavolume-controller","msg":"Creating PVC for datavolume","Datavolume":"cie/cve-netnode02-system-disk-0"}
I0317 04:54:50.285381 1 request.go:621] Throttling request took 1.047075834s, request: GET:https://100.105.0.1:443/apis/apiregistration.k8s.io/v1?timeout=32s
{"level":"info","ts":1647492891.8897398,"logger":"controller.config-controller","msg":"Updating CDIConfig","CDIConfig":"/config","CDIConfig.Name":"config","config":{"name":"config"}}
{"level":"error","ts":1647492925.934415,"logger":"controller","msg":"Reconciler error","controller":"datavolume-controller","name":"eip-cluster01-data-disk-60g-1","namespace":"cie","error":"Get https://100.101.186.62:8443/metrics: dial tcp 100.101.186.62:8443: i/o timeout","stacktrace":"kubevirt.io/containerized-data-importer/vendor/github.com/go-logr/zapr.(*zapLogger).Error\n\tvendor/github.com/go-logr/zapr/zapr.go:128\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:237\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:209\nkubevirt.io/containerized-data-importer/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tvendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:188\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nkubevirt.io/containerized-data-importer/vendor/k8s.io/apimachinery/pkg/util/wait.Until\n\tvendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90"}
{"level":"info","ts":1647492926.9348845,"logger":"controller.datavolume-controller","msg":"Datavolume finished, no longer updating progress","Namespace":"cie","Name":"eip-cluster01-system-disk-0","Phase":"Succeeded"}
{"level":"info","ts":1647492927.0977364,"logger":"controller.datavolume-controller","msg":"Datavolume finished, no longer updating progress","Namespace":"cie","Name":"eip-cluster01-data-disk-60g-1","Phase":"Succeeded"}
@npu21 Just want to confirm that you have the HonorWaitForFirstConsumer feature gate enabled, as described here: https://github.com/kubevirt/containerized-data-importer/blob/main/doc/waitforfirstconsumer-storage-handling.md
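For completeness, a sketch of how to check and enable the gate, assuming the default cluster-scoped CDI resource is named cdi:
# Show the currently enabled CDI feature gates
kubectl get cdi cdi -o jsonpath='{.spec.config.featureGates}{"\n"}'
# Enable HonorWaitForFirstConsumer; note this merge patch replaces the whole
# featureGates list, so include any other gates you already have enabled
kubectl patch cdi cdi --type merge -p '{"spec": {"config": {"featureGates": ["HonorWaitForFirstConsumer"]}}}'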
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
@mhenriks Thanks for the reminder! In fact, we already have HonorWaitForFirstConsumer enabled.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
/reopen
@lobshunter: You can't reopen an issue/PR unless you authored it or you are a collaborator.
In response to this:
/reopen
In fact, I met a similar issue. I was trying to launch a VM on a specific node, so I set VirtualMachineInstance.spec.nodeSelector, which ensures the VMI will be scheduled on that node. But I cannot do the same thing on the DataVolume, so they might end up on different nodes.
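To illustrate the gap, a trimmed-down, hypothetical VM manifest in the shape used in this thread (names and values are examples, not taken from the original report): the nodeSelector under spec.template.spec pins only the virt-launcher pod, while the dataVolumeTemplate has no field to express node affinity, so the PV placement depends entirely on the provisioner and the binding mode.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: my-vm                          # hypothetical name
  namespace: cie
spec:
  running: true
  dataVolumeTemplates:
  - metadata:
      name: my-vm-data-disk            # no node affinity can be set here
    spec:
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 60Gi
        storageClassName: hostpath-sas
      source:
        blank: {}
  template:
    spec:
      nodeSelector:
        kubernetes.io/hostname: hci-002   # pins only the virt-launcher pod
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
          - name: data
            disk:
              bus: virtio
      volumes:
      - name: data
        dataVolume:
          name: my-vm-data-disk
With WaitForFirstConsumer, the PVC should only be bound once the consumer pod is scheduled, which is what keeps the PV on the same node as the virt-launcher pod.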