
Pod Identity still not working

Open AleCo3lho opened this issue 1 year ago • 5 comments

What happened:

External DNS pod can't retrieve credentials

2024/04/01 15:46:53 Ignoring, HTTP credential provider invalid endpoint host, "169.254.170.23", only loopback hosts are allowed. <nil>
time="2024-04-01T15:46:53Z" level=fatal msg="records retrieval failed: failed to list hosted zones: NoCredentialProviders: no valid providers in chain. Deprecated.\n\tFor verbose messaging see aws.Config.CredentialsChainVerboseErrors"

What you expected to happen:

External DNS to retrieve the credentials

How to reproduce it (as minimally and precisely as possible):

Run external-dns with EKS Pod Identity on EKS; a rough setup sketch is below.
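
A rough sketch of that setup (cluster name, namespace, service account, and role ARN below are placeholders, not values from this report):

	# Associate an IAM role with the external-dns service account via EKS Pod Identity.
	aws eks create-pod-identity-association \
	  --cluster-name my-cluster \
	  --namespace external-dns \
	  --service-account external-dns \
	  --role-arn arn:aws:iam::123456789012:role/external-dns-route53

	# Install external-dns from the upstream chart.
	helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
	helm install external-dns external-dns/external-dns --namespace external-dns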

Anything else we need to know?:

Environment:

  • External-DNS version (use external-dns --version):
    • 0.14.1
  • DNS provider:
    • Route53
  • Others:

AleCo3lho avatar Apr 01 '24 15:04 AleCo3lho

Any news on this? I have the same issue and wonder why this happens.

laiminhtrung1997 avatar Apr 11 '24 04:04 laiminhtrung1997

> Any news on this? I have the same issue and wonder why this happens.

Hey @laiminhtrung1997, how are you, man? I have been testing this, and basically, if you have a container registry, you can change the NewSession function in provider/aws/session.go and remove the config variable from the session.NewSessionWithOptions options, like this:

	// Rely on the SDK's default credential chain instead of the explicit Config
	// (the Config field has been removed from the options here).
	session, err := session.NewSessionWithOptions(session.Options{
		SharedConfigState: session.SharedConfigEnable,
	})

By doing that, you can use

	make build.push IMAGE=your-registry/external-dns

to upload the image to your container image registry and then point your deployment at it (a Helm sketch is below). I will be creating a PR to discuss what can be done and to evaluate whether the Config is really needed and how to extend it.
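
Pointing the Helm release at that image could look roughly like this; a minimal sketch, assuming the upstream external-dns chart (image.repository and image.tag are its standard image values, and the registry, tag, and namespace here are placeholders):

	# Placeholder names; adjust the release, namespace, repository, and tag to your setup.
	helm upgrade --install external-dns external-dns/external-dns \
	  --namespace external-dns \
	  --set image.repository=your-registry/external-dns \
	  --set image.tag=pod-identity-test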

Hope it helps you

AleCo3lho avatar Apr 11 '24 23:04 AleCo3lho

Dear @AleCo3lho, after I associated the IAM Role with the ServiceAccount, I deployed external-dns with helm install immediately. The issue is that the external-dns pod does not get the eks-pod-identity-token volume mounted, so it cannot perform any actions against Route53. When I kill the pod, the new pod starts with the volume mounted. I think there is a time delay after associating the ServiceAccount with the IAM Role, or maybe something else; I have no idea. Could you please help me out with this scenario? I do not know whether your issue is the same as mine.

laiminhtrung1997 avatar Apr 12 '24 10:04 laiminhtrung1997

> Dear @AleCo3lho, after I associated the IAM Role with the ServiceAccount, I deployed external-dns with helm install immediately. The issue is that the external-dns pod does not get the eks-pod-identity-token volume mounted, so it cannot perform any actions against Route53. When I kill the pod, the new pod starts with the volume mounted. I think there is a time delay after associating the ServiceAccount with the IAM Role, or maybe something else; I have no idea. Could you please help me out with this scenario? I do not know whether your issue is the same as mine.

Hey man, I am not sure I understand the problem you are having; to me it looks like you are using IRSA, right?

AleCo3lho avatar Apr 12 '24 17:04 AleCo3lho

Dear @AleCo3lho, I followed these docs to use EKS Pod Identity, which replaces IRSA. Could you spend some time reading them?
https://aws.amazon.com/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/
https://docs.aws.amazon.com/eks/latest/userguide/pod-id-how-it-works.html#pod-id-agent-pod

When I start the Pod with the ServiceAccount that is associated with the IAM Role, the Pod does not have the eks-pod-identity-token volume. It does when I restart the Pod.

laiminhtrung1997 avatar Apr 13 '24 03:04 laiminhtrung1997
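
For anyone hitting the same ordering problem, a minimal workaround sketch (cluster name and namespace are placeholders; this assumes the EKS Pod Identity Agent add-on is installed and the association already exists) is to verify the association and then restart the deployment so the new pod gets the eks-pod-identity-token volume injected:

	# Confirm the association covers the external-dns service account.
	aws eks list-pod-identity-associations \
	  --cluster-name my-cluster \
	  --namespace external-dns

	# Recreate the pod so the Pod Identity agent can inject the token volume.
	kubectl -n external-dns rollout restart deployment/external-dns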