containerized-data-importer
Empty DataVolume is not being created
What happened: I am creating an empty DataVolume to upload a disk image, but I get an `Internal error occurred: failed calling webhook "datavolume-mutate.cdi.kubevirt.io"` error.
What you expected to happen: The empty DataVolume should be created.
How to reproduce it (as minimally and precisely as possible): Apply the DataVolume manifest under Additional context below.
Additional context:
Error:

```
Error from server (InternalError): error when creating "upload-dv.yaml": Internal error occurred: failed calling webhook "datavolume-mutate.cdi.kubevirt.io": failed to call webhook: Post "https://cdi-api.cdi.svc:443/datavolume-mutate?timeout=30s": unexpected EOF
```
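The error indicates that the Kubernetes apiserver could not reach the CDI apiserver behind the cdi-api service. A rough set of checks, assuming the default `cdi` namespace (the webhook configuration name is version-dependent, so it is only filtered by name here):

```bash
# List mutating webhook configurations that belong to CDI
# (the exact configuration name may differ between CDI versions).
kubectl get mutatingwebhookconfigurations | grep -i cdi

# Confirm the cdi-api service exists and has ready endpoints;
# no endpoints usually means the cdi-apiserver pod is not ready.
kubectl get svc -n cdi cdi-api
kubectl get endpoints -n cdi cdi-api
```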
KubeVirt version: v1.3.0. We also use OpenEBS for the local StorageClass in the Kubernetes cluster.
This is the YAML file used to create the empty DataVolume:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: empty-dv
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
spec:
  source:
    upload: {}
  pvc:
    storageClassName: openebs-hostpath
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 200Gi
```
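For reference, a minimal sketch of how a DataVolume like this is applied and then uploaded to; the virtctl flags below are assumptions based on common usage and may vary between versions:

```bash
# Create the DataVolume from the manifest above.
kubectl apply -f upload-dv.yaml

# Watch the DataVolume phase; with an upload source it should reach UploadReady.
kubectl get dv empty-dv -w

# Upload a disk image into the existing DataVolume.
# --no-create assumes the DV above was already created; the image path is a placeholder.
virtctl image-upload dv empty-dv --no-create --image-path=/path/to/disk.img --insecure
```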
Environment:
- CDI version (use `kubectl get deployments cdi-deployment -o yaml`): v1.60.3
- Kubernetes version (use `kubectl version`): v1.30.13
- DV specification: N/A
- Cloud provider or hardware configuration: On-prem cluster
- OS (e.g. from `/etc/os-release`): N/A
- Kernel (e.g. `uname -a`): N/A
- Install tools: N/A
- Others: N/A
Based on the error message it looks like there may be an issue with your deployment. Can you check that all of the CDI pods are shown as ready? Another thing to check is if you have any cluster network policies that might be preventing the CDI apiserver component from talking to the kubernetes apiserver.
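A rough way to run those checks, assuming CDI is installed in the default `cdi` namespace with the standard component names:

```bash
# All CDI pods should be Running and Ready
# (typically cdi-apiserver, cdi-deployment, cdi-uploadproxy, cdi-operator).
kubectl get pods -n cdi

# The CDI apiserver logs often show TLS or connectivity failures behind the webhook error.
kubectl logs -n cdi deployment/cdi-apiserver

# Look for NetworkPolicies that could block traffic between the Kubernetes apiserver and cdi-api.
kubectl get networkpolicy --all-namespaces
```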
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.