
[Manila-CSI-Plugin] How to prevent the plugin from sending the parameter nfs-shareClient 0.0.0.0/0

Open tintin-63 opened this issue 2 years ago • 9 comments

Is this a BUG REPORT or FEATURE REQUEST?:

/kind feature

What happened: When using the Manila CSI plugin with NFS, the default behavior is to send nfs-shareClient 0.0.0.0/0 to OpenStack Manila, and this changes the access/permission configuration on our NFS server storage backend.

What you expected to happen: We would like to know whether there is a way to configure the Manila CSI plugin storage class to not send any nfs-shareClient list to OpenStack Manila, so that the Manila access list stays empty for shares created by the CSI Manila plugin, i.e.

Instead of:

host1:~> manila access-list 082f8726-3cd8-47bf-ab38-fce6a81514ad
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
| id                                   | access_type | access_to | access_level | state  | access_key | created_at                 | updated_at |
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+
| 9791bd6d-71aa-449c-aad2-2ad8310dbd2d | ip          | 0.0.0.0/0 | rw           | active | None       | 2023-03-30T05:31:13.000000 | None       |
+--------------------------------------+-------------+-----------+--------------+--------+------------+----------------------------+------------+

We would like to have an empty access-list in openstack for the share created by Manila-CSI-Plugin:

host1:~> manila access-list 082f8726-3cd8-47bf-ab38-fce6a81514ad
+----+-------------+-----------+--------------+-------+------------+------------+------------+
| id | access_type | access_to | access_level | state | access_key | created_at | updated_at |
+----+-------------+-----------+--------------+-------+------------+------------+------------+
+----+-------------+-----------+--------------+-------+------------+------------+------------+
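For context, this is roughly the StorageClass we use today (a minimal sketch assuming the nfs.manila.csi.openstack.org provisioner; the share type and secret names are placeholders from our environment):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-nfs
provisioner: nfs.manila.csi.openstack.org
parameters:
  # Manila share type; "default" is a placeholder for a share type in our cloud.
  type: default
  # No nfs-shareClient is set here, so the plugin falls back to
  # granting access to 0.0.0.0/0, which is what we want to avoid.
  # nfs-shareClient: 0.0.0.0/0
  csi.storage.k8s.io/provisioner-secret-name: csi-manila-secrets
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-manila-secrets
  csi.storage.k8s.io/node-stage-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: csi-manila-secrets
  csi.storage.k8s.io/node-publish-secret-namespace: default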

How to reproduce it: N/A

Anything else we need to know?: N/A

Environment:

  • openstack-cloud-controller-manager (or other related binary) version: v1.25.3-5-ae26c257
  • OpenStack version: Kolla
  • Others: K8s version v1.25.3

Best Regards

tintin-63 avatar Apr 18 '23 15:04 tintin-63

Hi @tintin-63,

What's the use case for doing this? You can customize "nfsShareClient" to match specific nodes. Is there a reason not to use it at all?
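For example, something like this would scope access to the cluster node subnet instead of 0.0.0.0/0 (a sketch; 10.0.0.0/24 is a placeholder for your node network CIDR):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-nfs-nodes
provisioner: nfs.manila.csi.openstack.org
parameters:
  type: default
  # Grant rw access only to the node subnet rather than to everyone.
  nfs-shareClient: 10.0.0.0/24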

gouthampacha avatar Apr 18 '23 16:04 gouthampacha

Hello,

In OpenStack, we use the Manila Nexenta driver to interface with our Nexenta storage backend. By default, a new NFS share created on the backend inherits the access rules and options of its parent folder as defined in Nexenta. However, if the Manila Nexenta driver receives a share creation request with an explicit IP access list (as sent from the CSI Manila plugin in Kubernetes), it overwrites all access rules and options with that client IP value, instead of falling back to the inherited options defined on the Nexenta top/parent NFS folder. I was just curious whether there is already an option in the CSI Manila plugin to not send any NFS client access data.

Thanks

tintin-63 avatar Apr 18 '23 17:04 tintin-63

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 17 '23 17:07 k8s-triage-robot

/remove-lifecycle stale

@tintin-63 how are you going to use the NFS shares when they don't have an access_to entry? BTW, are you aware that you can define a custom storage class with nfs-shareClient: 255.255.255.255/32, so that your share won't be accessible?
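A sketch of that workaround:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-manila-nfs-noaccess
provisioner: nfs.manila.csi.openstack.org
parameters:
  type: default
  # A host address that matches no real client, so the access rule
  # created by the plugin effectively grants access to nobody.
  nfs-shareClient: 255.255.255.255/32

Access could then be granted out of band, e.g. with manila access-allow <share-id> ip <cidr> --access-level rw, although in your Nexenta case the goal is to inherit rules rather than add them.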

kayrus avatar Jul 17 '23 18:07 kayrus

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 24 '24 17:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 23 '24 17:02 k8s-triage-robot

/remove-lifecycle rotten

I think we haven't done this because of a lack of time, but this issue is still relevant.

gouthampacha avatar Feb 27 '24 22:02 gouthampacha

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 27 '24 22:05 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jun 26 '24 23:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Jul 26 '24 23:07 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Jul 26 '24 23:07 k8s-ci-robot