Inconsistency with go-client Azure auth
Not sure yet whether this is a bug or not; hopefully someone more familiar with AKS can help me sort that part out. :)
One of our users reported that token refresh on an AKS cluster does not work. The kubeconfig for the problematic cluster is:
apiVersion: v1
kind: Config
preferences: {}
current-context: my-context
clusters:
  - name: my-cluster
    cluster:
      certificate-authority-data: >-
        <base 64 encoded certificate data>
      server: 'https://some-id.hcp.eastus.azmk8s.io:443'
      insecure-skip-tls-verify: false
contexts:
  - name: my-context
    context:
      cluster: my-cluster
      user: some-user
      namespace: app
users:
  - name: some-user
    user:
      auth-provider:
        config:
          access-token: >-
            <JWT access token>
          apiserver-id: <uuid>
          client-id: <uuid>
          environment: AzurePublicCloud
          expires-in: '3600'
          expires-on: '1571778882'
          refresh-token: >-
            <refresh token>
          tenant-id: <uuid>
        name: azure
Looking at the Azure auth here makes me think this sort of auth is not really supported at all. To me it looks like some sort of AKS / Azure AD auth integration. All my limited trials with AKS have used certificate authentication, so I haven't stumbled onto anything like this before.
Then, looking at the go-client auth for Azure here, it seems it is able to use this sort of auth config.
So this raises two questions:
- Should this sort of auth config even work currently? If not, are there any workarounds we could use in the short term?
- Why is the JS client Azure auth so different from its go-client counterpart? Or am I looking at the wrong pieces of code?
Then the bigger question/topic: since there are a gazillion different permutations of how auth can be configured in a kubeconfig, and the client libraries in different languages seem to be playing catch-up with each other on them, has there been any discussion of building something generally usable for all of them?
A temporary solution that worked for me was to update the auth config before using it:
// Patch the Azure auth-provider config so the client's generic cloud auth
// code can refresh the token by shelling out to the Azure CLI.
if (user.authProvider && user.authProvider.config) {
  const config = user.authProvider.config;
  // 'expires-on' is a Unix timestamp in seconds; toISOString() keeps the
  // time of day (toDateString() would silently drop it).
  config.expiry = new Date(Number(config['expires-on']) * 1000).toISOString();
  config['cmd-path'] = 'az';
  config['cmd-args'] = 'account get-access-token';
  config['token-key'] = "$['accessToken']";
  config['expiry-key'] = "$['expiresOn']";
}
The main issue with this solution is that the fresh access token is never stored back in the kubeconfig file, which means that the next time the config is loaded from the file it will request a new access token again; that can get annoying if you're doing it often.
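If that becomes a problem, one way around it is to write the refreshed token back into the kubeconfig yourself after the client has fetched it. A rough sketch (not part of the client API; the file handling, js-yaml v4 load/dump calls, and the persistToken helper are my own assumptions):

import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';
import * as yaml from 'js-yaml';

// Write a refreshed token back into the kubeconfig so the next load
// starts from a valid access token instead of requesting a new one.
function persistToken(userName: string, accessToken: string, expiresOn: string): void {
  const kubeconfigPath = process.env.KUBECONFIG || path.join(os.homedir(), '.kube', 'config');
  const kubeconfig = yaml.load(fs.readFileSync(kubeconfigPath, 'utf8')) as any;
  const user = kubeconfig.users.find((u: any) => u.name === userName);
  if (user && user.user['auth-provider']) {
    user.user['auth-provider'].config['access-token'] = accessToken;
    user.user['auth-provider'].config['expires-on'] = expiresOn;
    fs.writeFileSync(kubeconfigPath, yaml.dump(kubeconfig));
  }
}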
It would be great to have fully integrated Azure auth built into this client!
Yep, this is a limitation in the current cloud auth code. We'd gladly welcome a PR that performed the right token regeneration using the Azure ADAL libraries.
@brendandburns Is this the right lib? https://github.com/AzureAD/azure-activedirectory-library-for-js
@chelnak Yeah, I think you need acquireToken
https://github.com/AzureAD/azure-activedirectory-library-for-js/blob/master/lib/adal.js#L637
A PR for this would be great!
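For anyone picking this up: note that adal.js is the browser library; for this client the Node sibling, adal-node, exposes the same flow. A rough sketch of what the refresh might look like (the parameter wiring from the kubeconfig fields is my assumption, not tested against AKS):

import { AuthenticationContext, TokenResponse } from 'adal-node';

// Exchange the refresh-token from the kubeconfig for a new access token.
// tenantId, clientId, apiserverId and refreshToken all come straight from
// the auth-provider config block shown above.
function refreshAzureToken(
  tenantId: string,
  clientId: string,
  apiserverId: string,
  refreshToken: string,
): Promise<TokenResponse> {
  const context = new AuthenticationContext(`https://login.microsoftonline.com/${tenantId}`);
  return new Promise((resolve, reject) => {
    // The new access token is scoped to the AKS API server (the 'resource').
    context.acquireTokenWithRefreshToken(refreshToken, clientId, apiserverId, (err, response) => {
      if (err) {
        reject(err);
      } else {
        resolve(response as TokenResponse);
      }
    });
  });
}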
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/lifecycle frozen