aws-fsx-csi-driver
Unable to mount FSx as PV from a peered VPC
/kind bug
What happened?
Hi, I've been trying to set up a PV and PVC for pods that use an FSx file system in a peered VPC, but I am unable to get the FSx volume mounted as a PV. The aws-fsx-csi-driver handles the PV and PVC setup and uses the dnsName for mounting, but an FSx file system in a peered VPC cannot be mounted by DNS name; its IP address must be used instead. What's the recommended way to accomplish this?
What you expected to happen?
The driver to consume the FSx IP address and mount the volume.
How to reproduce it (as minimally and precisely as possible)?
Create a PV and PVC for a static FSx volume using the dnsName, where the FSx volume is in a peered VPC.
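For reference, a minimal static-provisioning PV of the kind described above might look like the following. This is a sketch: the file system ID, DNS name, and mount name are placeholders, and the attribute names follow the driver's static provisioning examples.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fsx-pv
spec:
  capacity:
    storage: 1200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0  # placeholder file system ID
    volumeAttributes:
      # Mounting by dnsName fails when the FSx file system is in a peered VPC
      dnsName: fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com
      mountname: abcdefgh  # placeholder mount name
```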
Anything else we need to know?:
Environment: AWS EKS
- Kubernetes version (use kubectl version): 1.27
- Driver version: 1.0
Specifying the IP address in place of the dnsName worked. Please improve the documentation.
Yeah, I had the same issue with DNS. I just changed to the IP address of the FSx file system's ENI and everything worked well. You can find this IP with a simple nslookup from within the same VPC.
Changing to the IP address in the mount command worked perfectly.
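As a sketch of that mount-command workaround (the IP and mount name below are hypothetical placeholders; the device syntax is the standard Lustre `<ip>@tcp:/<mountname>` form from the AWS mount instructions):

```shell
FSX_IP="10.0.1.25"       # hypothetical ENI IP, found via nslookup inside the VPC
MOUNT_NAME="abcdefgh"    # hypothetical mount name from `aws fsx describe-file-systems`

# Lustre device syntax: <ip>@tcp:/<mountname>
DEVICE="${FSX_IP}@tcp:/${MOUNT_NAME}"

# Print the mount command to run (with sudo) on the instance;
# requires the lustre client to be installed there.
echo mount -t lustre -o relatime,flock "${DEVICE}" /fsx
```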
In short, use the IP address instead of the DNS name. This document is helpful: https://docs.aws.amazon.com/fsx/latest/LustreGuide/mounting-on-premises.html
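Applied to the CSI driver's static PV, the same workaround just swaps the IP address into the dnsName attribute. The values here are placeholders, and the attribute names follow the driver's static provisioning example:

```yaml
  csi:
    driver: fsx.csi.aws.com
    volumeHandle: fs-0123456789abcdef0  # placeholder file system ID
    volumeAttributes:
      # IP address of the FSx ENI (found via nslookup from inside the VPC),
      # used in place of the DNS name
      dnsName: 10.0.1.25
      mountname: abcdefgh  # placeholder mount name
```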
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/lifecycle frozen
Docs update needed.