azure-k8s-metrics-adapter
No token file in the metrics adapter pod
I have a K8s cluster in Azure that was created with Terraform.
I'm trying to deploy the metrics adapter according to the instructions. It deploys, but its pod in the custom-metrics namespace fails with the following log line:
unable to construct client config: unable to construct lister client config to initialize provider: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
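For reference, one way to confirm whether the token volume is mounted into the pod at all (the pod name is a placeholder, and this only works while the pod stays up long enough to exec into):

$ kubectl get pods -n custom-metrics
$ kubectl exec -n custom-metrics <adapter-pod-name> -- ls /var/run/secrets/kubernetes.io/serviceaccount

When the token is mounted, this should list ca.crt, namespace, and token.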
My service accounts look like:
$ kubectl describe serviceaccounts
Name: azure-k8s-metrics-adapter
Namespace: custom-metrics
Labels: app=azure-k8s-metrics-adapter
chart=azure-k8s-metrics-adapter-0.1.0
heritage=Tiller
release=azure-k8s-metrics-adapter
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"ServiceAccount","metadata":{"annotations":{},"labels":{"app":"azure-k8s-metrics-adapter","chart":"azure-k8s-met...
Image pull secrets: <none>
Mountable secrets: azure-k8s-metrics-adapter-token-hcdq4
Tokens: azure-k8s-metrics-adapter-token-hcdq4
Events: <none>
Name: default
Namespace: custom-metrics
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: default-token-2pwlk
Tokens: default-token-2pwlk
Events: <none>
When I look under Configuration/Secrets, I see that azure-k8s-metrics-adapter (Opaque), azure-k8s-metrics-adapter-token-hcdq4 (service-account-token), and default-token-2pwlk are all present, so the token secret itself exists.
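Since the token secret exists but the file is missing inside the pod, one thing worth checking (just a guess on my part) is whether token automounting is disabled on the service account or in the pod spec. The resource names below assume the Helm chart defaults:

$ kubectl get serviceaccount azure-k8s-metrics-adapter -n custom-metrics -o jsonpath='{.automountServiceAccountToken}'
$ kubectl get deployment azure-k8s-metrics-adapter -n custom-metrics -o jsonpath='{.spec.template.spec.automountServiceAccountToken}'

An explicit false from either command would explain the missing file; empty output means the default (mount the token) applies.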
The cluster has been created with the following Terraform resource:
resource "azurerm_kubernetes_cluster" "main" {
name = "${var.prefix}-cluster"
location = azurerm_resource_group.main.location
resource_group_name = azurerm_resource_group.main.name
dns_prefix = var.prefix
default_node_pool {
name = "default"
node_count = 1
vm_size = "Standard_DS2_v2"
os_disk_size_gb = 30
vnet_subnet_id = azurerm_subnet.aks.id
}
network_profile {
network_plugin = "azure"
}
addon_profile {
aci_connector_linux {
enabled = true
subnet_name = azurerm_subnet.aci.name
}
}
role_based_access_control {
enabled = true
}
service_principal {
client_id = azuread_application.main.application_id
client_secret = azuread_service_principal_password.main.value
}
}
Did I miss something in the configuration? How do I make it work?
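In case it helps an answer along: a minimal fix I would try, assuming disabled automounting turns out to be the cause, is to re-enable it on the service account and recreate the pod (a sketch, not something I have verified; the label selector is taken from the service account labels above):

$ kubectl patch serviceaccount azure-k8s-metrics-adapter -n custom-metrics -p '{"automountServiceAccountToken": true}'
$ kubectl delete pod -n custom-metrics -l app=azure-k8s-metrics-adapter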
Kubernetes version: 1.15.10
- [x] Running on AKS