Installing Kiam in K8s outside AWS - "failed to get EC2 IAM info"
Hi all. I deployed Kiam in an external Kubernetes cluster to get access to AWS KMS from an Argo CD pod. I created the roles via the Terraform example and appended additional environment variables through Helm, but I still hit this problem. I think it is because Kiam expects to be installed inside AWS. Does anyone have an idea how I can resolve this issue?
My Helm settings:
argocd app create kiam \
--repo https://uswitch.github.io/kiam-helm-charts/charts/ \
--helm-chart kiam \
--revision 5.7.0 \
--dest-namespace base \
--dest-server https://10.242.20.10:6443 \
--helm-set-string "server.extraEnv[0].name=AWS_ACCESS_KEY_ID" \
--helm-set-string "server.extraEnv[0].value=XxXxXxXxXx" \
--helm-set-string "server.extraEnv[1].name=AWS_SECRET_ACCESS_KEY" \
--helm-set-string "server.extraEnv[1].value=XxXxXxXxXx" \
--helm-set-string "extraHostPathMounts[0].name=ssl-certs" \
--helm-set-string "extraHostPathMounts[0].mountPath=/etc/ssl/certs" \
--helm-set-string "extraHostPathMounts[0].readOnly=true" \
--helm-set-string "extraHostPathMounts[0].hostPath=/etc/pki/ca-trust/extracted/pem" \
-p server.sslCertHostPath=/etc/ssl/certs \
-p agent.tlsSecret=kiam-agent-certificate-secret \
-p agent.tlsCerts.caFileName=ca.crt \
-p agent.tlsCerts.certFileName=tls.crt \
-p agent.tlsCerts.keyFileName=tls.key \
-p server.assumeRoleArn=arn:aws:iam::0XXXXXXX0:role/kiam-server \
-p server.tlsSecret=kiam-server-certificate-secret \
-p server.tlsCerts.caFileName=ca.crt \
-p server.tlsCerts.certFileName=tls.crt \
-p server.tlsCerts.keyFileName=tls.key
The full error log is:
{"level":"info","msg":"starting server","time":"2020-03-31T16:35:15Z"}
{"level":"info","msg":"started prometheus metric listener 0.0.0.0:9620","time":"2020-03-31T16:35:15Z"}
{"level":"info","msg":"detecting arn prefix","time":"2020-03-31T16:35:15Z"}
{"level":"fatal","msg":"error creating listener: error detecting arn prefix: error accessing iam info: EC2MetadataRequestError: failed to get EC2 IAM info\ncaused by: EC2MetadataError: failed to make EC2Metadata request\ncaused by: 404 Not Found\n\nThe resource could not be found.\n\n ","time":"2020-03-31T16:35:20Z"}
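For context (my note, not part of the original log): the ARN-prefix detection queries the EC2 instance metadata service (IMDS) at the link-local address 169.254.169.254, which only exists on EC2 instances. A quick hedged check of what Kiam is effectively trying to do:

```shell
# Query the IAM info path of the EC2 instance metadata service (IMDS).
# On an EC2 instance this prints JSON including the instance profile ARN;
# outside AWS the address is unreachable or returns an error, matching
# the "failed to get EC2 IAM info" fatal above.
curl -s --max-time 2 http://169.254.169.254/latest/meta-data/iam/info \
  || echo "no EC2 metadata service reachable"
```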
Kiam will try to autodetect the ARN prefix for your roles by default; it does this via the EC2 metadata API.
You can get around this by passing in the server.roleBaseArn argument.
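To illustrate (my sketch, not from the thread; the account ID and role name are placeholders): the value is a prefix ending in `role/`, onto which Kiam joins the role name taken from the pod annotation, so no metadata lookup is needed.

```shell
# Hedged sketch of how the base ARN and per-pod role name combine.
# Both values below are made-up placeholders.
ROLE_BASE_ARN="arn:aws:iam::123456789012:role/"
POD_ROLE="kiam-example-role"
echo "${ROLE_BASE_ARN}${POD_ROLE}"
# → arn:aws:iam::123456789012:role/kiam-example-role
```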
Something like amazon.com?
I added
-p server.roleBaseArn=arn:aws:iam::481746587383:role/
but now I hit the next problem....
https://github.com/uswitch/kiam/issues/401
I seem to remember we also have a health check at startup that verifies it can communicate with the metadata API. If it can't, the process will exit.
Having said that, I suspect that if people are happy to use explicit naming there's no reason it couldn't work outside of AWS, but it would need some flags to control this behaviour (and thus prevent using autodetected roles). It's not huge, but it's not trivial either, and unlikely something we'd choose to do ourselves (but as ever, open to extensions from people!)
I am using kube2iam in AWS because of its simplicity, but I need to set up the same infra on bare-metal hardware. Looking to run kiam/kube2iam outside EC2 instances: is there any official way to get it working?