cloud-provider-openstack
[Manila-CSI-Plugin] How to prevent the plugin from sending the parameter nfs-shareClient 0.0.0.0/0
Is this a BUG REPORT or FEATURE REQUEST?:
/kind feature
What happened: When using the Manila CSI plugin for NFS, the default behavior is to send nfs-shareClient 0.0.0.0/0 to OpenStack Manila, and this creates an access/permission configuration change on our NFS storage backend.
What you expected to happen: We would like to know whether there is a way to configure the Manila CSI plugin storage class so that it does not send any nfs-shareClient list to OpenStack Manila, leaving the Manila access list empty for shares created by the CSI Manila plugin, i.e.
Instead of:
host1:~> manila access-list 082f8726-3cd8-47bf-ab38-fce6a81514ad
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
| id                                   | access_type | access_to | access_level | state  | access_key | created_at                 | updated_at |
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
| 9791bd6d-71aa-449c-aad2-2ad8310dbd2d | ip          | 0.0.0.0/0 | rw           | active | None       | 2023-03-30T05:31:13.000000 | None       |
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
We would like an empty access list in OpenStack for the share created by the Manila CSI plugin:
host1:~> manila access-list 082f8726-3cd8-47bf-ab38-fce6a81514ad
+----+-------------+-----------+--------------+-------+------------+------------+------------+
| id | access_type | access_to | access_level | state | access_key | created_at | updated_at |
+----+-------------+-----------+--------------+-------+------------+------------+------------+
+----+-------------+-----------+--------------+-------+------------+------------+------------+
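For context, this is where the parameter in question lives today. A minimal StorageClass sketch, assuming the standard nfs.manila.csi.openstack.org provisioner name; the share type and secret names are placeholders and not values from this report:

```yaml
# Sketch of a Manila CSI NFS StorageClass; the share type and secret names
# are placeholders for illustration.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-nfs
provisioner: nfs.manila.csi.openstack.org
parameters:
  type: default
  # When this parameter is omitted, the plugin currently defaults to 0.0.0.0/0,
  # which produces the access rule shown in the first access-list above.
  nfs-shareClient: 0.0.0.0/0
  csi.storage.k8s.io/provisioner-secret-name: csi-manila-secrets
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-manila-secrets
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: csi-manila-secrets
  csi.storage.k8s.io/node-publish-secret-namespace: default
```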
How to reproduce it: N/A
Anything else we need to know?: N/A
Environment:
- openstack-cloud-controller-manager(or other related binary) version: v1.25.3-5-ae26c257
- OpenStack version: Kolla
- Others: K8s version v1.25.3
Best Regards
Hi @tintin-63,
What's the use case for doing this? You can customize "nfsShareClient" to match specific nodes. Any reason not to use it at all?
Hello,
In OpenStack, we use the Manila Nexenta driver to interface with a Nexenta storage backend. By default, a new NFS share created on the backend inherits the access rules and options defined on its parent folder in Nexenta. However, if the Manila Nexenta driver receives a share-creation request with a defined IP access list (sent by the CSI Manila plugin in Kubernetes), it overwrites all access rules and options with that client IP value instead of falling back to the options inherited from the top/parent NFS folder in Nexenta. I was just curious whether there is already an option in the CSI Manila plugin to not send any NFS client access data.
Thanks
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
@tintin-63 how are you going to use NFS shares when they don't have an access_to? BTW, are you aware that you can define a custom storage class with nfs-shareClient: 255.255.255.255/32, so that your share won't be accessible?
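A minimal sketch of that workaround, again assuming the standard nfs.manila.csi.openstack.org provisioner and placeholder secret names; the single-host CIDR still results in an access rule being sent, but one that matches no real client:

```yaml
# Sketch of the suggested workaround: nfs-shareClient points at a CIDR that
# matches no real client, so the provisioned share is effectively
# unreachable through this rule. Secret names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-nfs-noaccess
provisioner: nfs.manila.csi.openstack.org
parameters:
  type: default
  nfs-shareClient: 255.255.255.255/32
  csi.storage.k8s.io/provisioner-secret-name: csi-manila-secrets
  csi.storage.k8s.io/provisioner-secret-namespace: default
```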
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
I think we haven't done this due to a lack of time, but this issue is still relevant.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.