colima
issue with kubectl
Description
Unable to execute the kubectl command.
I'm getting the exception below when executing the kubectl command locally:
E1215 22:14:12.237942 41787 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1215 22:14:12.238652 41787 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1215 22:14:12.239633 41787 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1215 22:14:12.240716 41787 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E1215 22:14:12.241875 41787 memcache.go:238] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Version
Colima Version: colima version 0.4.6
runtime: docker
arch: aarch64
client: v20.10.21
server: v20.10.18
Lima Version: 0.14.1
Qemu Version: not installed with Colima
Operating System
- [ ] macOS Intel
- [X] macOS M1
- [ ] Linux
Reproduction Steps
1. kubectl get pod
Expected behaviour
Pods should appear.
Additional context
I have fixed the above issue by downloading the Colima kubectl; now I'm facing a different issue. Can anyone please look into this and help me?
I'm connecting to a namespace installed on our dev server, which is accessible over the VPN.
I have connected to the VPN, but I'm still getting "No resources found in oracle-demo namespace".
Do I need to change any network configuration? If yes, please suggest.
Stack-trace:
kubectl get pod -n oracle-demo
E1215 22:57:36.886435 45011 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1215 22:57:36.940789 45011 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1215 22:57:36.943363 45011 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1215 22:57:36.946445 45011 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
No resources found in oracle-demo namespace.
I see the same output on my M1
$ colima version
colima version 0.5.0
git commit: 5a94ab4f098ec0fe94e6d0df8b411fb149fe26fe
runtime: docker
arch: aarch64
client: v20.10.21
server: v20.10.20
kubernetes
Client Version: v1.26.0
Kustomize Version: v4.5.7
Server Version: v1.25.4+k3s1
$ k get pod
E1216 22:48:41.768849 50108 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1216 22:48:41.770089 50108 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1216 22:48:41.771748 50108 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1216 22:48:41.773249 50108 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
No resources found in default namespace.
I have not been able to reproduce this.
What macOS version are you running?
Does it report the same error if you use another profile?
colima start test
I have not been able to reproduce this.
What macOS version are you running? Does it report the same error if you use another profile?
colima start test
macOS version: 13.0.1 (22A400)
$ colima start --cpu 2 --memory 2 --disk 10 test
INFO[0000] starting colima [profile=test]
INFO[0000] runtime: docker
INFO[0000] starting ... context=vm
INFO[0013] provisioning ... context=docker
INFO[0013] starting ... context=docker
INFO[0019] done
$ k get pod
E1216 23:45:59.864733 51276 memcache.go:238] couldn't get current server API group list: Get "https://127.0.0.1:6443/api?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused
E1216 23:45:59.865117 51276 memcache.go:238] couldn't get current server API group list: Get "https://127.0.0.1:6443/api?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused
E1216 23:45:59.866162 51276 memcache.go:238] couldn't get current server API group list: Get "https://127.0.0.1:6443/api?timeout=32s": dial tcp 127.0.0.1:6443: connect: connection refused
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
@iankettler according to your logs, you didn't start Kubernetes. That error you are getting is from a separate Kubernetes context.
You should specify the --kubernetes flag and it should work.
colima start --kubernetes test
If you still get an error, are you able to access Kubernetes from the VM?
colima ssh -p test # assuming you're using the test profile
kubectl get pods
You may want to list all your Kubernetes contexts to ascertain you are accessing the right one.
kubectl config get-contexts
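To switch to the right one, something like this should work (colima-test is the context name Colima creates for a profile named test, per the logs above; adjust to your profile):
kubectl config use-context colima-test   # switch to the Colima cluster's context
kubectl get pods -A                      # confirm the API server now responds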
It seems to be working now. Do you always need a profile (k8s namespace) to start from? Previously I started without any profile.
$ colima start --cpu 2 --memory 2 --disk 10 --kubernetes test
INFO[0000] starting colima [profile=test]
INFO[0000] runtime: docker+k3s
INFO[0000] starting ... context=vm
INFO[0014] provisioning ... context=docker
INFO[0014] starting ... context=docker
INFO[0019] provisioning ... context=kubernetes
INFO[0019] downloading and installing ... context=kubernetes
INFO[0024] loading oci images ... context=kubernetes
INFO[0028] starting ... context=kubernetes
INFO[0032] updating config ... context=kubernetes
INFO[0032] Switched to context "colima-test". context=kubernetes
$ k get pod
No resources found in default namespace.
$ k get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 2m21s
$ colima ssh -p test
colima-test:/Users/ian/Projects/app/infra/k8s$ kubectl get pods
No resources found in default namespace.
colima-test:/Users/ian/Projects/app/infra/k8s$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* default default default
Do you always need a profile (k8s namespace) to start from? Previously I started without any profile.
Not at all, I was just trying to be sure if the default profile is somehow broken.
Cause of Error
- This problem can arise when you haven’t set the kubeconfig environment variable.
export KUBECONFIG=/etc/kubernetes/admin.conf (or export KUBECONFIG=$HOME/.kube/config)
- The .kube/config file has not been copied to the user's $HOME directory.
Fix the Error – The connection to the server localhost:8080 was refused
- Check whether the kubeconfig environment variable is exported; if not, export it:
export KUBECONFIG=/etc/kubernetes/admin.conf (or export KUBECONFIG=$HOME/.kube/config)
- Check for the .kube/config file in your home directory. If it is not there, copy the admin.conf to your home directory and point KUBECONFIG at it using the following commands:
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Whenever you start the master node you may need to set this environment variable. It can be set permanently using the following command:
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
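A quick way to confirm what kubectl is actually using (the paths shown are the common defaults; adjust to your setup):
echo "$KUBECONFIG"                            # empty output means kubectl falls back to ~/.kube/config
ls -l $HOME/.kube/config                      # confirm the file exists and is owned by your user
kubectl config view --minify | grep server    # show the API server address of the active context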
For others that may come across this: I had similar error messages and unsetting the proxy env vars (in the VM) did the trick - it looked like my corporate proxy was getting in the way. Oddly enough, I wasn't able to reproduce it again after deleting (colima delete) and re-creating the instance, even with the proxy env vars configured again.
$ k logs -n kube-system metrics-server-5c8978b444-xj22k
E1221 00:46:50.891960 61969 memcache.go:255] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1221 00:46:50.898546 61969 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E1221 00:46:50.900605 61969 memcache.go:106] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
Error from server: Get "https://192.168.5.15:10250/containerLogs/kube-system/metrics-server-5c8978b444-xj22k/metrics-server": proxyconnect tcp: proxy error from 127.0.0.1:6443 while dialing <REDACTED PROXY ADDRESS>, code 503: 503 Service Unavailable
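In case it helps, the workaround amounted to roughly this (a sketch only; the exact variable names depend on how your proxy was configured, and whether it takes effect may depend on where they were set for k3s):
colima ssh                                            # open a shell inside the VM (add -p <profile> if you use one)
unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY   # clear proxy variables for that shell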
If this helps: try sudo swapoff -a, then retry kubectl. If it works, make sure you disable swap in fstab.
#/swap.img none swap sw 0 0
export KUBECONFIG=/etc/kubernetes/admin.conf
swapoff -a
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
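To confirm swap is actually off afterwards:
swapon --show    # no output means swap is disabled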
Same issue here; it seems metrics-server failed to start.
❯ colima start --kubernetes test
INFO[0000] starting colima [profile=test]
INFO[0000] runtime: docker+k3s
INFO[0000] preparing network ... context=vm
INFO[0000] creating and starting ... context=vm
INFO[0021] provisioning ... context=docker
INFO[0021] starting ... context=docker
INFO[0026] provisioning ... context=kubernetes
INFO[0026] downloading and installing ... context=kubernetes
INFO[0031] loading oci images ... context=kubernetes
INFO[0035] starting ... context=kubernetes
INFO[0038] updating config ... context=kubernetes
INFO[0039] Switched to context "colima-test". context=kubernetes
INFO[0039] done
then inside the vm:
❯ colima ssh -p test
colima-test:/Users/archcst$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system local-path-provisioner-79f67d76f8-9c2nl 1/1 Running 0 44s
kube-system coredns-597584b69b-d89f2 1/1 Running 0 44s
kube-system metrics-server-5c8978b444-msrjd 1/1 Running 0 44s
then describe metrics-server:
colima-test:/Users/archcst$ kubectl describe pod metrics-server-5c8978b444-msrjd -n kube-system
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 64s default-scheduler Successfully assigned kube-system/metrics-server-5c8978b444-msrjd to colima-test
Normal Pulled 60s kubelet Container image "rancher/mirrored-metrics-server:v0.6.1" already present on machine
Normal Created 59s kubelet Created container metrics-server
Normal Started 59s kubelet Started container metrics-server
Warning Unhealthy 44s (x11 over 59s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
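If it helps narrow this down, the container logs might show why the readiness endpoint returns 500 (pod name taken from the output above; use whatever kubectl get pods shows for you):
kubectl logs -n kube-system metrics-server-5c8978b444-msrjd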
@ArchCST do you get same experience with containerd runtime?
colima start --runtime containerd --kubernetes
@ArchCST do you get same experience with containerd runtime?
colima start --runtime containerd --kubernetes
yes it's the same,
❯ colima start test2 --runtime containerd --kubernetes
INFO[0000] starting colima [profile=test2]
INFO[0000] runtime: containerd+k3s
INFO[0000] preparing network ... context=vm
INFO[0000] creating and starting ... context=vm
INFO[0021] provisioning ... context=containerd
INFO[0021] starting ... context=containerd
INFO[0026] provisioning ... context=kubernetes
INFO[0026] downloading and installing ... context=kubernetes
INFO[0031] loading oci images ... context=kubernetes
INFO[0036] starting ... context=kubernetes
INFO[0039] updating config ... context=kubernetes
INFO[0039] Switched to context "colima-test2". context=kubernetes
INFO[0039] done
then in the vm:
❯ colima ssh -p test2
$ kubectl describe pod metrics -n kube-system
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 49s default-scheduler Successfully assigned kube-system/metrics-server-5c8978b444-k8nlf to colima-test2
Normal Pulled 47s kubelet Container image "rancher/mirrored-metrics-server:v0.6.1" already present on machine
Normal Created 47s kubelet Created container metrics-server
Normal Started 47s kubelet Started container metrics-server
Warning Unhealthy 33s (x10 over 46s) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
In my case, I got these errors while trying to connect to a Private Cluster.
I have yet to figure it out, but this is the message I got while trying to check the workloads on AKS:
Private clusters require that the browser is running on a machine that has access to the AKS cluster's Azure Virtual Network.
To remediate the issue in my own case, I need to use a VM in the same VNET as the Private Cluster to access the cluster.
There are other options listed here: https://learn.microsoft.com/en-us/azure/aks/private-clusters#options-for-connecting-to-the-private-cluster
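For reference, one of the options on that page is the AKS "command invoke" feature, which runs kubectl inside the cluster so no network line of sight is needed (the resource group and cluster name below are placeholders):
az aks command invoke --resource-group <my-resource-group> --name <my-private-cluster> --command "kubectl get pods -n default"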
Cause of Error
1. This problem can arise when you haven’t set the kubeconfig environment variable.
export KUBECONFIG=/etc/kubernetes/admin.conf (or export KUBECONFIG=$HOME/.kube/config)
2. The .kube/config file has not been copied to the user's $HOME directory.
Fix the Error – The connection to the server localhost:8080 was refused
1. Check whether the kubeconfig environment variable is exported; if not, export it:
export KUBECONFIG=/etc/kubernetes/admin.conf (or export KUBECONFIG=$HOME/.kube/config)
2. Check for the .kube/config file in your home directory. If it is not there, copy the admin.conf to your home directory and point KUBECONFIG at it using the following commands:
cp /etc/kubernetes/admin.conf $HOME/
chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Whenever you start the master node you may need to set this environment variable. It can be set permanently using the following command:
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
For me there was no file named admin.conf inside the /etc/kubernetes/ directory. Instead, the same file was generated with the name kubelet.conf, for reasons I'm not aware of. So I had to rename the file to admin.conf and then follow the rest of the steps.
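Roughly what that looked like (a copy rather than a literal rename, and assuming the same paths as the steps above):
sudo cp /etc/kubernetes/kubelet.conf /etc/kubernetes/admin.conf    # then continue with the cp/chown/export steps above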