
Filestore CSI driver doesn't work on Shared VPC

Open · mahmoudcb opened this issue Jun 04 '20 · 9 comments

From the documentation:

Filestore instances on the shared VPC network cannot be created from service projects. If you attempt to create a Filestore instance from a service project that is attached to a shared VPC host project, the shared VPC network is not listed under Authorized network. Similarly, attempting to create the instance using gcloud or the REST API results in the following error:

    ERROR: (gcloud.filestore.instances.create) INVALID_ARGUMENT: network '[SHARED_VPC_NETWORK]' does not exist.

Workaround: you can create Filestore instances from the host project with the shared VPC as the authorized network. Once created, clients in any service project can mount the instance as usual. The caveats to this workaround include:

- The host project must be involved in the creation of Filestore instances.
- Costs for Filestore instances are charged to the host project instead of the service projects that use them.
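In script form, the documented workaround is to create the instance from the host project. A minimal sketch, where the instance name, zone, tier, and file-share values are illustrative placeholders:

    # Create the Filestore instance in the HOST project, on the shared VPC.
    # Everything except --project and --network is an illustrative placeholder.
    gcloud filestore instances create nfs-server \
        --project=<VPC_HOST_PROJECT_ID> \
        --zone=us-central1-c \
        --tier=STANDARD \
        --file-share=name=vol1,capacity=1TB \
        --network=name=<SHARED_VPC_NAME>

Clients in the service projects can then mount the instance's IP as a normal NFS export.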

The Filestore CSI controller fails with the same error:

    E0604 18:10:37.941382       1 utils.go:55] GRPC error: rpc error: code = Internal desc = CreateInstance operation failed: googleapi: Error 400: network 'shared-vpc' does not exist., badRequest

So the question is: how can I run the CSI controller on GKE in project Y (the service project) when the shared VPC lives in project X (the host project)?

mahmoudcb · Jun 04 '20 18:06

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot · Sep 02 '20 19:09

/assign @saikat-royc

msau42 · Sep 02 '20 19:09

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot · Oct 02 '20 20:10

/remove-lifecycle rotten

msau42 · Oct 02 '20 21:10

@mahmoudcb

If you use a host-project service account for GCFS_SA_FILE when deploying the driver (deploy/kubernetes/cluster_setup.sh), then I think it should work. Note that this also means you have to change what happens in deploy/project_setup.sh.

Hopefully that is a good workaround for you while we figure out a better solution.
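A rough sketch of that deployment flow, assuming GCFS_SA_FILE is consumed as an environment variable by cluster_setup.sh and that roles/file.editor approximates the bindings deploy/project_setup.sh normally creates:

    # Create a service account and key in the HOST project
    # (the account name and key path are placeholders).
    gcloud iam service-accounts create filestore-csi-sa \
        --project=<VPC_HOST_PROJECT_ID>
    gcloud iam service-accounts keys create /tmp/gcfs-sa.json \
        --iam-account=filestore-csi-sa@<VPC_HOST_PROJECT_ID>.iam.gserviceaccount.com

    # Grant Filestore permissions in the host project (assumed role; mirror
    # whatever deploy/project_setup.sh grants in the single-project case).
    gcloud projects add-iam-policy-binding <VPC_HOST_PROJECT_ID> \
        --member=serviceAccount:filestore-csi-sa@<VPC_HOST_PROJECT_ID>.iam.gserviceaccount.com \
        --role=roles/file.editor

    # Deploy the driver with the host-project credentials.
    GCFS_SA_FILE=/tmp/gcfs-sa.json ./deploy/kubernetes/cluster_setup.sh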

mattcary · Oct 07 '20 20:10

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot · Jan 05 '21 20:01

/lifecycle frozen

We should add a knob for changing the project used to dynamically provision instances from the cluster project to an arbitrary project (maybe from the storageclass). The SA would still have to be set up correctly.

mattcary · Jan 13 '21 22:01

For anyone else running into this issue, you can provision on the shared VPC from a service project with private service access by setting these options in the StorageClass definition:

parameters:
  network: projects/<VPC HOST PROJECT ID>/global/networks/<VPC NAME>
  connect-mode: PRIVATE_SERVICE_ACCESS

Reference: https://cloud.google.com/filestore/docs/shared-vpc
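For completeness, a sketch of a full StorageClass wrapping those parameters; the provisioner name filestore.csi.storage.gke.io is the driver's CSI name, while the metadata name and tier below are placeholders to adjust:

    # Apply a StorageClass that provisions Filestore instances on the shared
    # VPC through private service access (name and tier are illustrative).
    cat <<'EOF' | kubectl apply -f -
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: filestore-shared-vpc
    provisioner: filestore.csi.storage.gke.io
    parameters:
      tier: standard
      network: projects/<VPC HOST PROJECT ID>/global/networks/<VPC NAME>
      connect-mode: PRIVATE_SERVICE_ACCESS
    volumeBindingMode: WaitForFirstConsumer
    EOF

Per the linked doc, PRIVATE_SERVICE_ACCESS requires that the host project already has a private services access connection set up for the shared VPC.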

mikesmitty · Feb 08 '22 18:02

/remove-lifecycle stale

ghost · Feb 08 '22 19:02