otomi-core
otomi CLI supports clusters created by DigitalOcean
**Is your feature request related to a problem? Please describe.**
I would like to use the otomi CLI with custom clusters, e.g. DigitalOcean.
**Describe the solution you'd like**
- [ ] Add `doctl` to the otomi image
- [ ] Mount the required files into the container for authentication purposes (see the sketch below)

**Describe alternatives you've considered**
n/a
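A minimal sketch of what the second item could look like. The image name `otomi/core`, the mount targets, and the `/home/app` home directory (inferred from the stack trace below) are assumptions, not confirmed project settings:

```sh
# Hypothetical invocation: mount the kubeconfig, the doctl config, and the
# host's doctl binary into the otomi container so the client-go exec
# credential plugin can obtain tokens at runtime. Mounting the host binary
# generally works because doctl is a statically linked Go binary.
docker run --rm -it \
  -v "$HOME/.kube/config":/home/app/.kube/config \
  -v "$HOME/.config/doctl":/home/app/.config/doctl \
  -v "$(command -v doctl)":/usr/local/bin/doctl \
  otomi/core:latest
```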
```
otomi:global:error STDERR:
otomi:global:error Error: Kubernetes cluster unreachable: Get "https://2ffb6af3-605f-4ef0-9749-0b2a42c3225d.k8s.ondigitalocean.com/version": getting credentials: exec: executable doctl not found
otomi:global:error It looks like you are trying to use a client-go credential plugin that is not installed.
otomi:global:error To learn more about this feature, consult the documentation available at:
otomi:global:error https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
otomi:global:error
otomi:global:error COMBINED OUTPUT:
otomi:global:error Error: Kubernetes cluster unreachable: Get "https://2ffb6af3-605f-4ef0-9749-0b2a42c3225d.k8s.ondigitalocean.com/version": getting credentials: exec: executable doctl not found
otomi:global:error It looks like you are trying to use a client-go credential plugin that is not installed.
otomi:global:error To learn more about this feature, consult the documentation available at:
otomi:global:error https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins
otomi:global:error at hfCore (/home/app/stack/dist/src/common/hf.js:42:30)
otomi:global:error exit code: 1
otomi:global:error at ChildProcess.<anonymous> (/home/app/stack/node_modules/zx/dist/bundle.cjs:15261:22)
otomi:global:error at ChildProcess.emit (node:events:526:28)
otomi:global:error at maybeClose (node:internal/child_process:1092:16)
otomi:global:error at Process.ChildProcess._handle.onexit (node:internal/child_process:302:5) +0ms
```
We do not want to add cloud CLIs to the core image. However, we can mount the credentials if we know where they are, and just fail upon expiry so the user can retry...

Can you please update this issue to reflect that?
I am afraid it is not that simple, as the credentials are not stored anywhere. Instead, each time kubectl needs to authenticate, it calls the `doctl` command and obtains a short-lived token.
Example:

```
# doctl kubernetes cluster kubeconfig exec-credential --version=v1beta1 --context=default <redacted>
{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{"interactive":false},"status":{"expirationTimestamp":"2022-05-16T07:32:53Z","token":"<redacted>"}}
This is not reproducible. I have been successfully deploying to DOKS with the dockerized otomi CLI for weeks now.
The fact that you cannot reproduce it does not mean the issue does not exist. I still face this issue. It happens when I deploy from the otomi-core dir (development mode).
I always deploy from otomi-core dir. I can demo this.
If you can provide me with a sample KUBECONFIG that clearly shows that it invokes a command for access, that would be great. If it is not in there, you must be experiencing something else; if it is in there, then I must have been given a superadmin config with a long-lasting key.
Ran into this issue when first starting to use the otomi CLI, while working with Matthias. The way to circumvent this, for a while at least, was to use `doctl kubernetes cluster kubeconfig save <cluster-id|cluster-name> --expiry-seconds 3600`, so that there is no reason to call `doctl`.
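To make the workaround concrete: with `--expiry-seconds` set, doctl embeds a static bearer token in the kubeconfig instead of the exec credential plugin, so nothing inside the container needs `doctl` until the token expires:

```sh
# Write a kubeconfig with an embedded token valid for one hour; kubectl
# (and the dockerized otomi CLI) can then authenticate without invoking
# doctl. Re-run this command once the token expires.
doctl kubernetes cluster kubeconfig save <cluster-id|cluster-name> --expiry-seconds 3600
```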
The workaround is provided. Closing.
It would be nice to find this in the "known issues" section of the README.