aws-efs-csi-driver
Values reported for kubelet_volume_stats_capacity_bytes are VERY wrong
/kind bug
What did you do?
Opted in to volume metrics by setting the flag --vol-metrics-opt-in to true, then queried the metric kubelet_volume_stats_capacity_bytes (one way to do this is shown below).
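For reference, a minimal way to pull that metric, assuming kubectl access and the API server's node proxy (the node name is a placeholder):

```sh
# Scrape the kubelet metrics endpoint through the API server proxy and
# filter for the capacity metric; <node-name> is a placeholder.
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics" \
  | grep kubelet_volume_stats_capacity_bytes
```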
What you expected to happen?
I expected the reported metrics to be accurate. kubelet_volume_stats_used_bytes seems to be correct, but kubelet_volume_stats_capacity_bytes is certainly wrong (9 exabytes reported vs. 5Gi actual in my case).
How to reproduce it (as minimally and precisely as possible)?
- Have any sort of EFS volume.
- Query the kubelet metrics endpoint for the value of kubelet_volume_stats_capacity_bytes.
- Compare with the actual values (a sketch of this comparison follows the list).
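A sketch of that comparison, with hypothetical PVC and node names:

```sh
# Size requested in the claim:
kubectl get pvc <pvc-name> -o jsonpath='{.status.capacity.storage}'

# Capacity the kubelet reports for the same claim:
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics" \
  | grep kubelet_volume_stats_capacity_bytes | grep <pvc-name>
```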
Anything else we need to know?
Other flags set on the driver (a quick check for these follows the list):
- --endpoint=$(CSI_ENDPOINT)
- --logtostderr
- --v=2
- --vol-metrics-opt-in=true
- --vol-metrics-refresh-period=240
- --vol-metrics-fs-rate-limit=5
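A quick way to confirm the flags on a running cluster; the DaemonSet and container names below are assumptions based on a default Helm/manifest install:

```sh
# Print the args of the node plugin container; the names used here
# are assumptions (adjust for your deployment).
kubectl -n kube-system get daemonset efs-csi-node \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="efs-plugin")].args}'
```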
Environment
- Kubernetes version v1.28.4-eks-8cb36c9
- Driver versions:
- csi-node-driver-registrar:v2.8.0-eks-1-27-3
- aws-efs-csi-driver:v1.7.1
- csi-provisioner:v3.5.0-eks-1-27-3
Please also attach debug logs to help us better diagnose (instructions to gather debug logs can be found here).
No logs, but screenshots:
- The volume sizes as defined (5Gi and 1Gi): [screenshot]
- Reported volume sizes are either 9EB or 0: [screenshot]
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
This is expected. Kubernetes requires setting a size, but EFS grows dynamically: once you hit 1GB you can keep adding data, hence the actual limit is as huge as reported. This is unlike EBS, where the size is used to give you a volume of a specific capacity. In other words, the size doesn't really apply to EFS, but it still has to be set to comply with the Kubernetes spec.
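For what it's worth, the ~9 EB figure lines up with what the filesystem itself advertises: EFS mounts report a capacity of 8 EiB (2^63 bytes, roughly 9.2 x 10^18 bytes) to statfs, and the driver appears to pass that straight through to the kubelet. A quick check on a node, with placeholder path components:

```sh
# df on an EFS (NFS) mount shows Size as 8.0E, since EFS has no fixed
# capacity; the pod UID and PV name below are placeholders.
df -h /var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pv-name>/mount
```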