error: the server could not find the requested resource
Running kubectl dds on my cluster, I get the above error. Verbose mode doesn't tell me which resource is missing:
$ kubectl dds -v
error: the server could not find the requested resource
I have even deleted the pods from failing jobs, but I still get the error.
Running on Kubernetes v1.20. Version info below:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.4-eks-6b7464", GitCommit:"6b746440c04cb81db4426842b4ae65c3f7035e53", GitTreeState:"clean", BuildDate:"2021-03-19T19:35:50Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"20+", GitVersion:"v1.20.15-eks-6d3986b", GitCommit:"d4be14f563712c4e1964fe8a4171ca353b6e7e1a", GitTreeState:"clean", BuildDate:"2022-07-20T22:04:24Z", GoVersion:"go1.15.15", Compiler:"gc", Platform:"linux/amd64"}
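For context, one way to see which API call is failing is to run a plain kubectl command at high verbosity; at -v=9 the client logs every HTTP request and response, which usually shows exactly which URL returned the 404. A minimal sketch, assuming (as a guess) the failing lookup is against jobs in the default namespace:

$ kubectl get jobs -n default -v=9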
Interesting. Do you have a sample of how your cluster was set up (Terraform), or of the workloads in the cluster? I'm also curious what access you have in the cluster; maybe try kubectl whoami or kubectl auth can-i get pods -A (see https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-can-i-em-).
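To see everything the current identity is permitted to do in the namespace where dds fails (assuming it is literally named default), the built-in --list form is also handy:

$ kubectl auth can-i --list -n default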
$ kubectl auth can-i get pods -A
yes
I am able to run kubectl dds against other namespaces, just not our default namespace, which holds our main workloads.
Let me see if I can produce a sanitized version of our Terraform or Helm charts.
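In the meantime, enumerating the workload controllers in the affected namespace might help narrow down which resource type triggers the failing lookup (again assuming the namespace is named default):

$ kubectl get deployments,daemonsets,statefulsets,jobs,cronjobs -n default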
Very interesting. I assume the error is actually coming from the Kubernetes Go client (client-go). I can add some additional debugging next week to see if we can find where it's erroring.
Whoops, didn't mean to close. I have a build with verbose logging; I'll test it and release a new version next week, which should let us see what's wrong.
Thank you, looking forward to the build.
I've been waiting for krew to update the plugin version in the index: https://github.com/kubernetes-sigs/krew-index/pull/2721#issuecomment-1302970636
I know the maintainers are pretty busy, so I don't want to bother them with the request. If you want to test manually, you can download the new 0.2.3 release binary from the repo and put it in your $PATH. Then you should be able to enable extra verbose logging with -v 9 and see all the calls as they happen, so we can figure out why it's failing.
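A minimal sketch of the manual install, with placeholder URL and asset names since the exact release asset depends on your OS and architecture (check the repo's 0.2.3 release page for the real names):

$ curl -Lo kubectl-dds https://github.com/<org>/<repo>/releases/download/v0.2.3/<asset-for-your-platform>
$ chmod +x kubectl-dds
$ mv kubectl-dds ~/bin/    # or any other directory on your $PATH
$ kubectl dds -v 9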
Hi, I tried the latest binary and kubectl dds now completes without erroring. We also upgraded our cluster and client binary to 1.21, so I'm not sure whether that helped too.
Glad it's working for you :raised_hands: What version were you on before? Maybe I can test with that and see if there's a compatibility problem.
We were on 1.20 previously.