
> Are you trying to enable maintenance mode or cordon on the last available node? This is not allowed since it is the last node that can be used by...

> This doesn't help.
>
> `[root@localhost ~]# kubectl uncordon localhost.localdomain`
> `node/localhost.localdomain already uncordoned`
>
> But the warning is still coming in very fast:
>
> `W0719 06:11:25.694282 1 dispatcher.go:142] rejected by webhook "validator.harvesterhci.io": &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"",...`

> * set the `Spec.Unschedulable = false`.
> * remove the `Spec.Taints` with key `kubevirt.io/drain`.
> * delete the node annotation with key `harvesterhci.io/maintain-status`.

Editing the node shows it only has `kubevirt.io/schedulable: "true"`...
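For anyone else cleaning up the same state, here is a minimal client-go sketch of applying those three steps programmatically. This is my own sketch, not Harvester code; the kubeconfig handling and the `localhost.localdomain` node name are assumptions taken from the logs above:

```
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// assumption: KUBECONFIG points at the cluster's kubeconfig
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "localhost.localdomain" // node name taken from the logs above
	node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// 1. set Spec.Unschedulable = false
	node.Spec.Unschedulable = false

	// 2. remove any taint with key kubevirt.io/drain
	taints := make([]corev1.Taint, 0, len(node.Spec.Taints))
	for _, t := range node.Spec.Taints {
		if t.Key != "kubevirt.io/drain" {
			taints = append(taints, t)
		}
	}
	node.Spec.Taints = taints

	// 3. delete the harvesterhci.io/maintain-status annotation
	delete(node.Annotations, "harvesterhci.io/maintain-status")

	if _, err := client.CoreV1().Nodes().Update(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("cleaned up node", nodeName)
}
```

Functionally this is the same as doing the three edits by hand with `kubectl edit node`.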

I am afraid I am not able to generate a 'support bundle' on our system. But I can confirm that this error exists by default, without any operation in the Harvester UI.

I tried this fix:

```
--- a/pkg/webhook/resources/node/validator.go
+++ b/pkg/webhook/resources/node/validator.go
@@ -8,7 +8,7 @@ import (
 	"k8s.io/apimachinery/pkg/runtime"
 	ctlnode "github.com/harvester/harvester/pkg/controller/master/node"
-	werror "github.com/harvester/harvester/pkg/webhook/error"
+	//werror "github.com/harvester/harvester/pkg/webhook/error"
 	"github.com/harvester/harvester/pkg/webhook/types"
 )
@@ -69,5 +69,6 @@...
```

```
apiVersion: v1
kind: Node
metadata:
  annotations:
    csi.volume.kubernetes.io/nodeid: '{"driver.longhorn.io":"localhost.localdomain"}'
    flannel.alpha.coreos.com/backend-data: '{"VNI":1,"VtepMAC":"8a:82:75:78:14:b4"}'
    flannel.alpha.coreos.com/backend-type: vxlan
    flannel.alpha.coreos.com/kube-subnet-manager: "true"
    flannel.alpha.coreos.com/public-ip: 172.30.242.93
    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
    kubevirt.io/heartbeat: "2022-08-10T09:43:29Z"
    management.cattle.io/pod-limits: '{"cpu":"4700m","memory":"5708Mi"}'
    management.cattle.io/pod-requests: '{"cpu":"4440m","memory":"3360Mi","pods":"55"}'
    node.alpha.kubernetes.io/ttl: "0"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp:...
```

Actually, I didn't click any UI button, so I don't have any 'maintenance'-related record in ingress-nginx-controller. I just had a single server running Harvester. I tried to remove these...

```
func validateCordonAndMaintenanceMode(oldNode, newNode *corev1.Node, nodeList []*corev1.Node) error {
	// if old node already have "maintain-status" annotation or Unscheduleable=true,
	// it has already been enabled, so we skip it
	if...
```
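The function above is cut off in my paste, but the kind of check it appears to perform looks roughly like the sketch below. This is my reconstruction under assumptions, not the actual Harvester source; it assumes `fmt` and `corev1 "k8s.io/api/core/v1"` are imported:

```
// Sketch only: refuse to cordon or enable maintenance mode on the last
// schedulable node, and skip nodes that are already cordoned/in maintenance.
func validateCordonAndMaintenanceMode(oldNode, newNode *corev1.Node, nodeList []*corev1.Node) error {
	const maintainStatusAnnotation = "harvesterhci.io/maintain-status"

	// if the old node already has the maintain-status annotation or is already
	// Unschedulable, maintenance/cordon was enabled earlier, so skip the check
	if _, ok := oldNode.Annotations[maintainStatusAnnotation]; ok || oldNode.Spec.Unschedulable {
		return nil
	}

	// only validate updates that actually try to cordon or enter maintenance mode
	_, enteringMaintenance := newNode.Annotations[maintainStatusAnnotation]
	if !newNode.Spec.Unschedulable && !enteringMaintenance {
		return nil
	}

	// count the other nodes that would still be schedulable afterwards
	available := 0
	for _, node := range nodeList {
		if node.Name == newNode.Name {
			continue
		}
		_, inMaintenance := node.Annotations[maintainStatusAnnotation]
		if !inMaintenance && !node.Spec.Unschedulable {
			available++
		}
	}
	if available == 0 {
		return fmt.Errorf("cannot cordon or enable maintenance mode on the last available node %s", newNode.Name)
	}
	return nil
}
```

If the real check is roughly like this, then on a single-node cluster any update that looks like a cordon or maintenance request is always rejected, which would explain the stream of `rejected by webhook "validator.harvesterhci.io"` warnings above.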

> 1. set the `Spec.Unschedulable = false`.
> 2. remove the `Spec.Taints` with key `kubevirt.io/drain`.
> 3. delete the node annotation with key `harvesterhci.io/maintain-status`.

Does the node describe require a...

And I noticed there are a lot of warnings in the harvester pod log:

```
W0825 05:44:08.489494 8 transport.go:260] Unable to cancel request for *client.addQuery
time="2022-08-25T05:44:17Z" level=info msg="Event(v1.ObjectReference{Kind:\"Node\", Namespace:\"\", Name:\"localhost.localdomain\", UID:\"localhost.localdomain\", APIVersion:\"\",...
```