Does aws-iam-authenticator support AWS SDK v2 for connecting to a cluster?
AWS SDK v2 does not have the session.Session package, but creating a clientset requires passing a session.Session object. How should this be handled with AWS SDK v2? How do I create the sess object to pass inside GetTokenOptions?
```go
gen, err := token.NewGenerator(true, false)
opts := &token.GetTokenOptions{
	ClusterID: aws.StringValue(cluster.Name),
	Session:   sess,
}
tok, err := gen.GetWithOptions(opts)
clientset, err := kubernetes.NewForConfig(
	&rest.Config{
		Host:        aws.StringValue(cluster.Endpoint),
		BearerToken: tok.Token,
		TLSClientConfig: rest.TLSClientConfig{
			CAData: ca,
		},
	},
)
```
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
You see, my little bot friend, the +1 reactions were invented to track engagement without commenting on issues. You should check those through the API. :)
In any case, not stale; I +1 this.
/remove-lifecycle stale
For the record, @sahajavidya: if all you need is a token, you can write your own authentication for your cluster instead of using this project. See this PR for a sample presigned-token generator using pure aws-go-sdk-v2: https://github.com/weaveworks/eksctl/pull/5016/files#diff-73c5fd3d701e0ae7a76ddde1b6319a989fd7ebae87a9f1f2241d9c73d9dd1d17R1
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I just hit this and ended up writing a helper that converts a v2 config into a v1 session. After that, you can use the v1 aws-iam-authenticator package as before. This obviously does not fix aws-iam-authenticator's use of the old SDK version, but it does let people with v2 code keep using this package a bit longer.
```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	v1aws "github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
)

// ConvertAWSConfigToV1Session converts a v2 aws.Config into a v1 session.Session
// so that older v1 SDK functions can be used with credentials resolved by the v2 SDK.
func ConvertAWSConfigToV1Session(ctx context.Context, cfg *aws.Config) (*session.Session, error) {
	// fetch the credentials from the v2 configuration
	v2creds, err := cfg.Credentials.Retrieve(ctx)
	if err != nil {
		return nil, fmt.Errorf("error retrieving credentials from v2 config: %w", err)
	}
	// create v1 static credentials from the v2 credentials, including the session token
	v1creds := credentials.NewStaticCredentials(v2creds.AccessKeyID, v2creds.SecretAccessKey, v2creds.SessionToken)
	// init a v1 session with the static credentials
	v1sess, err := session.NewSession(&v1aws.Config{
		Region:      &cfg.Region,
		Credentials: v1creds,
	})
	if err != nil {
		return nil, fmt.Errorf("error creating v1 session from v2 credentials: %w", err)
	}
	return v1sess, nil
}
```