
Unable to mount FSx as PV from a peered VPC

Open satishpasumarthi opened this issue 1 year ago • 5 comments

/kind bug

What happened? Hi, I've been setting up a PV and PVC for pods that use an FSx file system in a peered VPC, but I am unable to get the FSx volume mounted as a PV. The aws-fsx-csi-driver handles PV and PVC setup using the file system's dnsname for mounting. However, an FSx file system in a peered VPC cannot be mounted by its DNS name; the IP address of the FSx file system must be used instead. What's the recommended way to accomplish this?

What you expected to happen? The driver should accept the FSx IP address and mount the volume.

How to reproduce it (as minimally and precisely as possible)? Create a PV and PVC for a static FSx volume using the dnsname, where the FSx volume is in a peered VPC.
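For reference, a minimal static-provisioning sketch of the workaround, assuming the driver's standard `csi.volumeAttributes` fields (`dnsname`, `mountname`); the file system ID, IP address, and mount name below are placeholders, not real values:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  mountOptions:
    - flock
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0   # placeholder file system ID
    volumeAttributes:
      # Workaround for peered VPCs: put the FSx IP address here
      # instead of the fs-*.fsx.<region>.amazonaws.com DNS name.
      dnsname: 198.51.100.10             # placeholder IP
      mountname: fsadbmev                # placeholder mount name
```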

Anything else we need to know?:

Environment: AWS EKS

  • Kubernetes version (use kubectl version): 1.27
  • Driver version: 1.0

satishpasumarthi avatar Oct 09 '23 19:10 satishpasumarthi

Specifying the IP address in place of the dnsname worked. Please improve the documentation.

satishpasumarthi avatar Oct 09 '23 21:10 satishpasumarthi

Yeah, I had the same issue with DNS. I switched from the FSx DNS name to the IP address of the FSx ENI and everything worked. You can find this IP with a simple nslookup from within the same VPC.

Changing to the IP address in the mount command worked perfectly.

jonathonbyrdziak avatar Oct 18 '23 17:10 jonathonbyrdziak

In short, use the IP address instead of the DNS name. This document is helpful: https://docs.aws.amazon.com/fsx/latest/LustreGuide/mounting-on-premises.html
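For comparison, the guide above mounts over a peering connection using the file system's IP address directly; a sketch with placeholder DNS name, IP, and mount name (not real values):

```shell
# Resolve the FSx file system's IP from an instance inside the FSx VPC
# (fs-0123456789abcdef0... is a placeholder DNS name):
nslookup fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com

# Mount using the IP address instead of the DNS name
# (198.51.100.10 and fsadbmev are placeholders for your IP and mount name):
sudo mount -t lustre -o relatime,flock 198.51.100.10@tcp:/fsadbmev /fsx
```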

paramjeet01 avatar Nov 04 '23 13:11 paramjeet01

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Feb 02 '24 14:02 k8s-triage-robot

/lifecycle frozen docs update needed

jacobwolfaws avatar Feb 27 '24 15:02 jacobwolfaws