autoscaling
Failed scheduling
Observed when a VM failed to start: the Pod was assigned to a node, but the kubelet prevented it from starting.
Events:
Type     Reason            Age   From                 Message
----     ------            ---   ----                 -------
Warning  FailedScheduling  43m   autoscale-scheduler  0/3 nodes are available: 1 Insufficient neonvm/kvm, 3 Insufficient cpu.
Warning  FailedScheduling  43m   autoscale-scheduler  0/3 nodes are available: 1 Insufficient neonvm/kvm, 3 Insufficient cpu.
Normal   Scheduled         43m   autoscale-scheduler  Successfully assigned default/vm-stress-0050-dz557 to i-08d5967318f466dd4.eu-west-1.compute.internal
Warning  OutOfcpu          43m   kubelet              Node didn't have enough resource: cpu, requested: 1000, used: 94939, capacity: 95690
As a result, the VM is stuck in the Failed state.
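The numbers in the kubelet's OutOfcpu event show why admission failed: the scheduler placed the pod, but by the time the kubelet admitted it, the node's free CPU was below the pod's request. A quick check of the arithmetic, using the millicore values from the event above:

```python
# Values taken from the kubelet OutOfcpu event (all in millicores)
capacity = 95690   # node cpu capacity reported by the kubelet
used = 94939       # cpu already committed on the node
requested = 1000   # cpu requested by the pod (1 vCPU)

free = capacity - used
print(f"free={free}m, fits={requested <= free}")  # free=751m, fits=False
```

Only 751m was free against a 1000m request, so the kubelet rejected the pod at admission, which is likely a race between the scheduler's view of the node and the node's actual committed resources.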
vm
% k get neonvm vm-stress-0050
NAME             CPUS   MEMORY   POD                    EXTRAIP         STATUS   AGE
vm-stress-0050                   vm-stress-0050-dz557   10.100.128.49   Failed   48m
pod
% k get po vm-stress-0050-dz557
NAME                   READY   STATUS     RESTARTS   AGE
vm-stress-0050-dz557   0/1     OutOfcpu   0          48m
pod status
% k get po vm-stress-0050-dz557 -ojson | jq '.status'
{
  "message": "Pod Node didn't have enough resource: cpu, requested: 1000, used: 94939, capacity: 95690",
  "phase": "Failed",
  "reason": "OutOfcpu",
  "startTime": "2023-07-04T11:19:28Z"
}