ceph-csi
Can I use iSCSI storage with the CSI driver?
Describe the bug
A clear and concise description of what the bug is.
Environment details
- Image/version of Ceph CSI driver :
- Helm chart version :
- Kernel version :
- Mounter used for mounting PVC (for cephfs it's fuse or kernel, for rbd it's krbd or rbd-nbd) :
- Kubernetes cluster version :
- Ceph cluster version :
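A minimal sketch of commands that can help gather these details (this assumes kubectl, helm, and ceph CLI access; the namespace and pod names are placeholders):
uname -r                                                               # kernel version on the node
kubectl version                                                        # Kubernetes cluster version
helm list -n <namespace>                                               # Helm chart version, if deployed via Helm
ceph versions                                                          # Ceph cluster version
kubectl -n <namespace> describe pod <provisioner-pod> | grep -i image  # Ceph CSI driver image/version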
Steps to reproduce
Steps to reproduce the behavior:
- Setup details: '...'
- Deployment to trigger the issue '....'
- See error
Actual results
Describe what happened
Expected behavior
A clear and concise description of what you expected to happen.
Logs
If the issue is in PVC creation, deletion, or cloning, please attach complete logs of the below containers.
- csi-provisioner and csi-rbdplugin/csi-cephfsplugin container logs from the provisioner pod.
If the issue is in PVC resize, please attach complete logs of the below containers.
- csi-resizer and csi-rbdplugin/csi-cephfsplugin container logs from the provisioner pod.
If the issue is in snapshot creation or deletion, please attach complete logs of the below containers.
- csi-snapshotter and csi-rbdplugin/csi-cephfsplugin container logs from the provisioner pod.
If the issue is in PVC mounting, please attach complete logs of the below containers.
- csi-rbdplugin/csi-cephfsplugin and driver-registrar container logs from the plugin pod on the node where the mount is failing.
- If required, attach dmesg logs.
Note: if it's an rbd issue please provide only rbd-related logs, and if it's a cephfs issue please provide cephfs logs. Example commands for collecting these logs are sketched below.
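A minimal sketch of how these logs can be collected with kubectl (namespace and pod names are placeholders, and which plugin container applies depends on your deployment):
kubectl -n <namespace> logs <provisioner-pod> -c csi-provisioner
kubectl -n <namespace> logs <provisioner-pod> -c csi-resizer       # for resize issues
kubectl -n <namespace> logs <provisioner-pod> -c csi-snapshotter   # for snapshot issues
kubectl -n <namespace> logs <provisioner-pod> -c csi-rbdplugin     # or -c csi-cephfsplugin
kubectl -n <namespace> logs <nodeplugin-pod> -c csi-rbdplugin      # plugin pod on the node where the mount fails; or -c csi-cephfsplugin
kubectl -n <namespace> logs <nodeplugin-pod> -c driver-registrar
dmesg                                                               # on the node, if required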
Additional context
Add any other context about the problem here.
For example:
Any existing bug report which describes a similar issue/behavior
@fanzetian Not yet; we are not ready yet. https://github.com/ceph/ceph-csi/issues/2003 has the active progress on this front.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.
@humblec any plan to work on it? If not, feel free to unassign/close it.
will reevaluate and confirm
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in a week if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity. Please re-open if this still requires investigation.