azuredisk-csi-driver
[V2] azuredisk_csi_driver_controller_publish_volume latency metric is incorrect in v2 driver
What happened:
In the ControllerPublishVolume func, disk attach is an async call, so the azuredisk_csi_driver_controller_publish_volume latency metric is currently incorrect in the v2 driver; it needs to measure the actual disk attach latency. The same issue affects the azuredisk_csi_driver_controller_unpublish_volume latency metric.
azure_metrics.go:114] "Observed Request Latency" latency_seconds=0.000512805 request="azuredisk_csi_driver_controller_publish_volume" resource_group="kubetest-rfl35lck" subscription_id="0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e" source="disk.csi.azure.com" volumeid="/subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-rfl35lck/providers/Microsoft.Compute/disks/pvc-065ce6f7-649d-4bd2-af11-c15b48e8e556" node="k8s-agentpool-18545342-vmss000001" result_code="succeeded"
cc @edreed
What you expected to happen:
How to reproduce it:
Anything else we need to know?:
Environment:
- CSI Driver version:
- Kubernetes version (use kubectl version):
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a):
- Install tools:
- Others:
/assign @shlokshah-dev
@landreasyan: GitHub didn't allow me to assign the following users: shlokshah-dev.
Note that only kubernetes-sigs members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @shlokshah-dev
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The issue is fixed for azuredisk_csi_driver_controller_publish_volume by #1531. A new attach_volume_latency metric is added to track the latency of the actual attach operation.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale