Panicking on short node pool names in show and node-pool create
I have a k3s cluster that originally had a large node, and I later added a node pool using the option to name it. When I run the show command, the CLI panics, apparently while iterating over the named pool.
#civo version
civo version
Civo CLI v1.0.66
#cluster nodes; the mids node-pool used to have more nodes but I scaled it down to 1 and am now hoping to delete it
k get nodes
NAME                                          STATUS                     ROLES    AGE     VERSION
k3s-test01-c25a-b952af-node-pool-50a8-xy15b   Ready,SchedulingDisabled   <none>   2d13h   v1.27.1+k3s1
k3s-test01-c25a-b952af-node-pool-7807-iy4yw   Ready                      <none>   11d     v1.27.1+k3s1
#node-pools for the cluster; I was preparing to remove the medium nodes
civo k8s node-pool ls test01
Node Pool 9897d03f-4e5c-4a8d-8107-61e12d318948:
+--------------------------------------+----------------+-------+--------+--------+
| Name | Size | Count | Labels | Taints |
+--------------------------------------+----------------+-------+--------+--------+
| 9897d03f-4e5c-4a8d-8107-61e12d318948 | g4s.kube.large | 1 | null | null |
+--------------------------------------+----------------+-------+--------+--------+
Node Pool mids:
+------+-----------------+-------+--------+--------+
| Name | Size | Count | Labels | Taints |
+------+-----------------+-------+--------+--------+
| mids | g4s.kube.medium | 1 | {} | [] |
+------+-----------------+-------+--------+--------+
| mids | g4s.kube.medium | 1 | {} | [] |
+------+-----------------+-------+--------+--------+
date && civo k8s show test01
Thu Oct 5 12:17:47 PM EDT 2023
ID : 8bde8967-b352-4dfd-88dd-294e7f4a8835
Name : test01
ClusterType : k3s
Region : NYC1
Nodes : 2
Size : g4s.kube.large
Status : ACTIVE
Firewall : k3s-cluster-test01-391d-b952af
Version : 1.27.1-k3s1 *
API Endpoint : https://212.2.240.49:6443
External IP : 212.2.240.49
DNS A record : 8bde8967-b352-4dfd-88dd-294e7f4a8835.k8s.civo.com
Installed Applications : civo-cluster-autoscaler, metrics-server, Traefik-v2-nodeport
* An upgrade to v1.28.2-k3s1 is available. Learn more about how to upgrade: civo k3s upgrade --help
Conditions:
+---------------------------------------+--------+
| Message | Status |
+---------------------------------------+--------+
| Control Plane is accessible | True |
+---------------------------------------+--------+
| Worker nodes from all pools are ready | True |
+---------------------------------------+--------+
| Cluster is on desired version | True |
+---------------------------------------+--------+
Pool (9897d0):
+---------------------------------------------+--------------+--------+----------------+-----------+----------+---------------+
| Name | IP | Status | Size | Cpu Cores | RAM (MB) | SSD disk (GB) |
+---------------------------------------------+--------------+--------+----------------+-----------+----------+---------------+
| k3s-test01-c25a-b952af-node-pool-7807-iy4yw | 212.2.240.49 | ACTIVE | g4s.kube.large | 4 | 8192 | 60 |
+---------------------------------------------+--------------+--------+----------------+-----------+----------+---------------+
Labels:
kubernetes.civo.com/node-pool=9897d03f-4e5c-4a8d-8107-61e12d318948
kubernetes.civo.com/node-size=g4s.kube.large
panic: runtime error: slice bounds out of range [:6] with length 4
goroutine 1 [running]:
github.com/civo/cli/cmd/kubernetes.glob..func27(0x1af5ae0?, {0xc000364180, 0x1, 0x1?})
/home/runner/work/cli/cli/cmd/kubernetes/kubernetes_show.go:187 +0x61e8
github.com/spf13/cobra.(*Command).execute(0x1af5ae0, {0xc000364160, 0x1, 0x1})
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:854 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x1ae6960)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x39c
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
github.com/civo/cli/cmd.Execute()
/home/runner/work/cli/cli/cmd/root.go:121 +0x25
main.main()
/home/runner/work/cli/cli/main.go:27 +0x17
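For what it's worth, the trace points at kubernetes_show.go:187, and the message "slice bounds out of range [:6] with length 4" suggests the pool header is built by slicing the first six characters of the pool name/ID without checking its length ("mids" is only 4 characters). A minimal sketch of what I assume is happening; the variable names are mine, not the actual CLI source:

```go
package main

import "fmt"

func main() {
	// Hypothetical reconstruction, not the actual CLI code: "mids" has
	// length 4, so slicing the first six characters panics with
	// "slice bounds out of range [:6] with length 4".
	poolName := "mids"
	// fmt.Printf("Pool (%s):\n", poolName[:6]) // panics for names shorter than 6 characters

	// Guarding the slice (or printing the full name when it is short) avoids the panic:
	short := poolName
	if len(poolName) > 6 {
		short = poolName[:6]
	}
	fmt.Printf("Pool (%s):\n", short)
}
```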
It's also a bit odd that the mids pool lists two rows when kubectl, the Civo UI, and the Civo CLI all show only 2 nodes total (the other being the large one). I suspect that's a leftover artifact from the nodes that were in that pool before I scaled it down.
civo k8s node-pool ls test01 && k get nodes
Node Pool 9897d03f-4e5c-4a8d-8107-61e12d318948:
+--------------------------------------+----------------+-------+--------+--------+
| Name | Size | Count | Labels | Taints |
+--------------------------------------+----------------+-------+--------+--------+
| 9897d03f-4e5c-4a8d-8107-61e12d318948 | g4s.kube.large | 1 | null | null |
+--------------------------------------+----------------+-------+--------+--------+
Node Pool mids:
+------+-----------------+-------+--------+--------+
| Name | Size | Count | Labels | Taints |
+------+-----------------+-------+--------+--------+
| mids | g4s.kube.medium | 1 | {} | [] |
+------+-----------------+-------+--------+--------+
| mids | g4s.kube.medium | 1 | {} | [] |
+------+-----------------+-------+--------+--------+
NAME                                          STATUS                     ROLES    AGE     VERSION
k3s-test01-c25a-b952af-node-pool-50a8-xy15b   Ready,SchedulingDisabled   <none>   2d13h   v1.27.1+k3s1
k3s-test01-c25a-b952af-node-pool-7807-iy4yw   Ready                      <none>   11d     v1.27.1+k3s1
Trying to remove the node pool, it seems like there is an issue with the short name:
civo k8s node-pool delete test01 mids
Please check if you are using the latest version of CLI and retry the command
If you are still facing issues, please report it on our community slack or open a GitHub issue (https://github.com/civo/cli/issues)
Error: Please provide the node pool ID with at least 6 characters for mids
It seems like a 6-character minimum is expected, but somehow I managed to create a 4-character name at some point.
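My guess is that the delete path treats the argument as a pool ID prefix and rejects anything shorter than six characters, which means a pool that was legitimately created with a 4-character name can never be addressed. A rough sketch of the kind of check I mean; the function name and signature are mine, not the CLI's:

```go
package main

import (
	"fmt"
	"strings"
)

// findPool is a guess at the kind of prefix lookup the delete command might do;
// it is not the actual CLI implementation.
func findPool(poolIDs []string, query string) (string, error) {
	if len(query) < 6 {
		return "", fmt.Errorf("please provide the node pool ID with at least 6 characters for %s", query)
	}
	for _, id := range poolIDs {
		if strings.HasPrefix(id, query) {
			return id, nil
		}
	}
	return "", fmt.Errorf("no node pool matching %q", query)
}

func main() {
	pools := []string{"9897d03f-4e5c-4a8d-8107-61e12d318948", "mids"}
	if _, err := findPool(pools, "mids"); err != nil {
		fmt.Println("Error:", err) // a 4-character name can never pass this check
	}
}
```

In any case, creating even shorter pool names reproduces the panic on the create path too: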
civo k8s node-pool create test01 --name tv -s g4s.kube.xsmall
panic: runtime error: slice bounds out of range [:6] with length 2
goroutine 1 [running]:
github.com/civo/cli/cmd/kubernetes.glob..func15(0x1af45e0?, {0xc0007965f0, 0x1, 0x5?})
/home/runner/work/cli/cli/cmd/kubernetes/kubernetes_nodepool_create.go:78 +0x7d0
github.com/spf13/cobra.(*Command).execute(0x1af45e0, {0xc0007965a0, 0x5, 0x5})
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:854 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x1ae6960)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x39c
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
github.com/civo/cli/cmd.Execute()
/home/runner/work/cli/cli/cmd/root.go:121 +0x25
main.main()
/home/runner/work/cli/cli/main.go:27 +0x17
OK, actually the node pool does get created anyway, even with a one-character name:
civo k8s node-pool create test01 --name x -s g4s.kube.xsmall -n 1
panic: runtime error: slice bounds out of range [:6] with length 1
goroutine 1 [running]:
github.com/civo/cli/cmd/kubernetes.glob..func15(0x1af45e0?, {0xc000278a10, 0x1, 0x7?})
/home/runner/work/cli/cli/cmd/kubernetes/kubernetes_nodepool_create.go:78 +0x7d0
github.com/spf13/cobra.(*Command).execute(0x1af45e0, {0xc0002789a0, 0x7, 0x7})
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:854 +0x663
github.com/spf13/cobra.(*Command).ExecuteC(0x1ae6960)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:958 +0x39c
github.com/spf13/cobra.(*Command).Execute(...)
/home/runner/go/pkg/mod/github.com/spf13/[email protected]/command.go:895
github.com/civo/cli/cmd.Execute()
/home/runner/work/cli/cli/cmd/root.go:121 +0x25
main.main()
/home/runner/work/cli/cli/main.go:27 +0x17
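If it helps, the same unguarded six-character slice appears to be the culprit at both kubernetes_show.go:187 and kubernetes_nodepool_create.go:78, so a small guarded truncation along these lines would cover both call sites. Just a suggestion, and the helper name is mine:

```go
package main

import "fmt"

// truncateID returns at most n characters of s, so short pool names such as
// "mids", "tv", or "x" no longer trip a slice-bounds panic.
func truncateID(s string, n int) string {
	if len(s) <= n {
		return s
	}
	return s[:n]
}

func main() {
	for _, name := range []string{"9897d03f-4e5c-4a8d-8107-61e12d318948", "mids", "tv", "x"} {
		fmt.Printf("Pool (%s):\n", truncateID(name, 6))
	}
}
```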
Hey, this PR should have closed this issue. Feel free to re-open if you encounter it again.