containerized-data-importer
operator does not work: error "config map cdi-apiserver-signer-bundle is not found"
What happened: I installed the newest release, 1.58.1, by downloading the YAML files and applying the operator manifest and the CR.
The operator manifest starts a deployment (and therefore a pod) as expected, which waits for the CR to be applied. After the CR is applied, nothing happens, or at least no new pods come up. The logs complain that the ConfigMap cdi-apiserver-signer-bundle is not available. That is correct, since nobody has created it. (The code looks like the operator should do this...)
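For reference, this is how I checked for the ConfigMap the operator complains about (assuming the default cdi namespace created by cdi-operator.yaml):

```sh
# the operator logs reference this ConfigMap; it is indeed absent
kubectl get configmap cdi-apiserver-signer-bundle -n cdi
```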
What you expected to happen: I should be able to create a DataVolume after applying the CDI operator.
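For illustration, a minimal DataVolume of the kind I expect to be able to create; the name, size, and source URL below are placeholders:

```sh
# create a minimal DataVolume that imports a disk image over HTTP
kubectl create -f - <<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-dv
spec:
  source:
    http:
      url: http://example.com/disk.img
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
EOF
```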
How to reproduce it (as minimally and precisely as possible):
follow the kubevirt documentation:
- export TAG=$(curl -s -w %{redirect_url} https://github.com/kubevirt/containerized-data-importer/releases/latest)
- export VERSION=$(echo ${TAG##*/})
- wget https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
- add a nodeSelector so the CDI deployment runs on the amd64 node, as the operator image is not multi-arch (see the patch sketch after this list)
- kubectl create -f cdi-operator.yaml
- kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
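The nodeSelector step above can be done by editing cdi-operator.yaml before applying it; as a sketch, it can equally be patched in afterwards (assuming the deployment name cdi-operator in the cdi namespace, which is what the manifest creates):

```sh
# pin the operator pod to amd64 nodes via the well-known kubernetes.io/arch label
kubectl -n cdi patch deployment cdi-operator --type merge \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/arch":"amd64"}}}}}'
```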
Additional context: I attached the logs from the operator deployment (logs.txt).
Environment:
- CDI version (use kubectl get deployments cdi-deployment -o yaml): 1.58.1
- Kubernetes version (use kubectl version): 1.26.9
- DV specification: N/A
- Cloud provider or hardware configuration: linux/amd64 control node, one amd64 worker, and one arm64 worker
- OS (e.g. from /etc/os-release): Ubuntu 22.04
- Kernel (e.g. uname -a): Linux wall-e 5.15.0-92-generic #102-Ubuntu SMP Wed Jan 10 09:33:48 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: N/A
- Others: N/A
Is this a fresh install or had you previously installed CDI on this cluster?
The complaints about the ConfigMap seem like a red herring, with the actual issue being
{"level":"info","ts":"2024-02-04T18:42:00Z","logger":"cdi-operator","msg":"Orphan object exists","Request.Namespace":"","Request.Name":"cdi","obj":{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","name":"cdi-apiserver"}}
If CDI was previously not cleanly uninstalled, these leftovers would prevent reinstalling it.
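For illustration, a sketch of how to check for and remove the orphaned ClusterRole named in the log before reinstalling (other cluster-scoped CDI objects may be left over as well):

```sh
# the log names ClusterRole cdi-apiserver as an orphan from the old install
kubectl get clusterrole cdi-apiserver
# remove it so a fresh install can recreate and own it
kubectl delete clusterrole cdi-apiserver
```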
Yes, there was an old installation. I tried to clean up everything as well as possible. Is there documentation listing which objects must not be there?
All a user has to do is delete the CDI custom resource, and that will take care of removing the objects. If the custom resource was cleaned up successfully, there should not be leftovers
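As a sketch, assuming the default CR name cdi from cdi-cr.yaml:

```sh
# deleting the CDI CR makes the operator tear down the objects it manages
kubectl delete cdi cdi
# once that finishes, the operator manifest itself can be removed
kubectl delete -f cdi-operator.yaml
```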
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/close uninstalling CDI should work by deleting the CDI CR
@akalenyu: Closing this issue.
In response to this:
/close uninstalling CDI should work by deleting the CDI CR
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.