cluster-api-provider-azure
Support disableLocalAccounts for AKS clusters
/kind feature
**Describe the solution you'd like:** Add support for setting `DisableLocalAccounts`. This is a security feature that prevents users from fetching the admin kubeconfig, which carries a 2-year certificate with `system:masters` privileges.
To support this, CAPZ will need to be changed to fetch the user kubeconfig instead of the admin kubeconfig in the `GetCredentials` function.
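As a sketch of what the API change could look like (the field placement and package are assumptions mirroring the AKS `ManagedClusterProperties.DisableLocalAccounts` property, not a merged CAPZ design):

```go
// Hypothetical excerpt from the AzureManagedControlPlane API types.
package v1beta1

type AzureManagedControlPlaneSpec struct {
	// ...existing fields elided...

	// DisableLocalAccounts disables AKS local accounts, so the admin
	// kubeconfig (with its 2-year system:masters certificate) can no
	// longer be fetched from the managed cluster.
	// +optional
	DisableLocalAccounts *bool `json:"disableLocalAccounts,omitempty"`
}
```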
**Anything else you would like to add:** Note that this is only relevant for AKS clusters.
/area managedclusters
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
In response to this:
> /close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten /reopen
@CecileRobertMichon: Reopened this issue.
In response to this:
/remove-lifecycle rotten /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/reopen
@CecileRobertMichon: Reopened this issue.
In response to this:
/reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
When we disable local accounts, we will not have access to the target AKS cluster via the admin kubeconfig. We need to use the API below in CAPZ to fetch the user credentials:
https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4#ManagedClustersClient.ListClusterUserCredentials
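As a rough sketch (the function name, plumbing, and `DefaultAzureCredential` choice are illustrative assumptions, not CAPZ's actual `GetCredentials` implementation), the call could look like this:

```go
package credentials

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

// getUserKubeconfig fetches the user (non-admin) kubeconfig, which keeps
// working when disableLocalAccounts is set, unlike ListClusterAdminCredentials.
func getUserKubeconfig(ctx context.Context, subscriptionID, resourceGroup, clusterName string) ([]byte, error) {
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		return nil, err
	}
	client, err := armcontainerservice.NewManagedClustersClient(subscriptionID, cred, nil)
	if err != nil {
		return nil, err
	}
	resp, err := client.ListClusterUserCredentials(ctx, resourceGroup, clusterName, nil)
	if err != nil {
		return nil, err
	}
	if len(resp.Kubeconfigs) == 0 {
		return nil, fmt.Errorf("no kubeconfig returned for cluster %s", clusterName)
	}
	// Value holds the raw kubeconfig YAML, as shown below.
	return resp.Kubeconfigs[0].Value, nil
}
```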
The kubeconfig that gets generated has the details of the user embedded in it, as shown below.
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <cert-data>
    server: <aks-api-server-endpoint>
  name: aks-test
contexts:
- context:
    cluster: aks-test
    user: clusterUser_test-rg_aks-test
  name: aks-test
current-context: aks-test
kind: Config
preferences: {}
users:
- name: clusterUser_test-rg_aks-test
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - get-token
      - --environment
      - AzurePublicCloud
      - --server-id
      - <server-id>
      - --client-id
      - <client-id>
      - --tenant-id
      - <tenant-id>
      - --login
      - devicecode
      command: kubelogin
      env: null
      installHint: |2

        kubelogin is not installed which is required to connect to AAD enabled cluster.
        To learn more, please go to https://aka.ms/aks/kubelogin
      provideClusterInfo: false
```
This becomes a blocker for a service authenticating to the AKS cluster, since the exec plugin requires kubelogin. The errors below were observed from the capi-controller-manager pod when trying to reach the target cluster with the above kubeconfig:
```
E0905 20:09:29.509244 1 controller.go:326] "Reconciler error" err=<
  failed to create client for Cluster rnl-ns/azure-spc-test: Get "https://azure-spc-test-83jmqdw3.hcp.eastus.azmk8s.io:443/api?timeout=10s": getting credentials: exec: executable kubelogin not found

  It looks like you are trying to use a client-go credential plugin that is not installed.

  To learn more about this feature, consult the documentation available at:
  https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins

  kubelogin is not installed which is required to connect to AAD enabled cluster.
  To learn more, please go to https://aka.ms/aks/kubelogin
> controller="machinepool" controllerGroup="cluster.x-k8s.io" controllerKind="MachinePool" MachinePool="rnl-ns/workerpool01234" namespace="rnl-ns" name="workerpool01234" reconcileID=b9f01900-7dbd-49a3-a2ff-2f13c4e6d064
```
To resolve the above issue, we can install kubelogin as part of the CAPZ pod, get a token using the command below:

```
kubelogin get-token -l spn --server-id <server-id> --client-id <client-id> --client-secret <client-secret>
```

and generate a kubeconfig using the token, as shown below.
```yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <cert-data>
    server: <aks-api-server-endpoint>
  name: aks-test
contexts:
- context:
    cluster: aks-test
    user: clusterUser_test-rg_aks-test
  name: aks-test
current-context: aks-test
kind: Config
preferences: {}
users:
- name: clusterUser_test-rg_aks-test
  user:
    token: <token-obtained-from-kubelogin>
```
@CecileRobertMichon @mboersma @alexeldeib, thoughts?
Found a better way to do it: I dug through the kubelogin code and figured out that it uses the API below to get the token:
https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#AzureCLICredential.GetToken
We can implement it in a similar way, as shown here :) https://github.com/Azure/kubelogin/blob/c4cf27c62a41a89130efa15dba61f5401d61b814/pkg/token/serviceprincipaltokensecret.go#L17-L51
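For the `-l spn` flow discussed above, a minimal sketch of that approach could look like the following, using azidentity's `ClientSecretCredential` (names and plumbing are illustrative assumptions, not kubelogin's or CAPZ's actual code):

```go
package credentials

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
)

// aksToken obtains an AAD token for the AKS server app with a service
// principal, the same flow kubelogin performs for -l spn, so no kubelogin
// binary is needed in the pod.
func aksToken(ctx context.Context, tenantID, clientID, clientSecret, serverID string) (string, error) {
	cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, clientSecret, nil)
	if err != nil {
		return "", err
	}
	// The scope is the AKS AAD server application ID plus /.default, the
	// same audience kubelogin requests.
	tok, err := cred.GetToken(ctx, policy.TokenRequestOptions{
		Scopes: []string{serverID + "/.default"},
	})
	if err != nil {
		return "", err
	}
	return tok.Token, nil
}
```

The returned token can then be written into the `user.token` field of the kubeconfig shown earlier, removing the need for the kubelogin binary in the CAPZ pod.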