[BUG] Should adjust scanner CPU request in example yaml
Version of Eraser
v1.4.0-beta.0
Expected Behavior
I deployed Eraser on a standard CAPZ cluster with VM size Standard_B2s (2 vCPUs, 4 GiB memory). The eraser pod should start on all nodes.
Actual Behavior
The eraser pod on the control-plane node cannot start because of OutOfcpu:
NAMESPACE     NAME                                          READY STATUS   RESTARTS AGE IP              NODE                             NOMINATED NODE READINESS GATES
eraser-system eraser-controller-manager-8589f5dd7-wvq4w     1/1   Running  0        37m 192.168.183.131 zhecheng-802-md-0-svlgx-lhwjg    <none>         <none>
eraser-system eraser-zhecheng-802-control-plane-cfz5s-58fq2 0/3 OutOfcpu 0 37m <none> zhecheng-802-control-plane-cfz5s <none> <none>
eraser-system eraser-zhecheng-802-md-0-svlgx-8n9gw-56mzk 0/3 Completed 0 37m 192.168.108.3 zhecheng-802-md-0-svlgx-8n9gw <none> <none>
eraser-system eraser-zhecheng-802-md-0-svlgx-lhwjg-gfx9t 0/3 Completed 0 37m 192.168.183.132 zhecheng-802-md-0-svlgx-lhwjg <none> <none>
eraser-system eraser-zhecheng-802-md-0-svlgx-vwp9n-68kl2 0/3 Completed 0 37m 192.168.41.68 zhecheng-802-md-0-svlgx-vwp9n <none> <none>
Warning OutOfcpu 20m kubelet Node didn't have enough resource: cpu, requested: 1007, used: 1100, capacity: 2000
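The numbers in the event explain the failure directly: the node has 2000m of CPU capacity, 1100m is already requested by other pods, and the eraser pod asks for another 1007m, so the admission check fails. A minimal sketch of that check (values taken from the event above; units assumed to be millicores):

```python
# Sketch of the kubelet admission check behind OutOfcpu, using the
# millicore values reported in the event above.

CAPACITY_M = 2000   # Standard_B2s: 2 vCPUs = 2000m
USED_M = 1100       # CPU already requested by pods on the control-plane node
REQUESTED_M = 1007  # sum of the eraser pod's container CPU requests

def fits(capacity_m: int, used_m: int, requested_m: int) -> bool:
    """The pod fits only if its request does not push total requests past capacity."""
    return used_m + requested_m <= capacity_m

# 1100 + 1007 = 2107 > 2000, so the pod is rejected.
print(fits(CAPACITY_M, USED_M, REQUESTED_M))  # → False
```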
The scanner's CPU request in the example YAML should be smaller: https://github.com/eraser-dev/eraser/blob/v1.4.0-beta.0/deploy/eraser.yaml#L431
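A minimal sketch of the kind of change being proposed, lowering the scanner container's CPU request so the pod fits on a 2-vCPU node. The specific values here are illustrative placeholders, not a tested recommendation:

```yaml
# Hypothetical resources block for the scanner container in
# deploy/eraser.yaml; exact values are for the maintainers to decide.
resources:
  requests:
    cpu: 250m        # illustrative: small enough to fit on a 2-vCPU node
  limits:
    cpu: "1"
```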
Steps To Reproduce
- Deploy a CAPZ cluster
- Deploy eraser
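The deploy step above can be sketched as follows, assuming Eraser is installed from the release manifest linked earlier (the CAPZ cluster creation itself is elided, and the manifest URL mirrors that link rather than being verified here):

```shell
# Assumes an existing CAPZ cluster with Standard_B2s nodes and a
# kubeconfig pointing at it.
kubectl apply -f https://raw.githubusercontent.com/eraser-dev/eraser/v1.4.0-beta.0/deploy/eraser.yaml

# Watch for the control-plane eraser pod failing with OutOfcpu:
kubectl get pods -n eraser-system -o wide
```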
Are you willing to submit PRs to contribute to this bug fix?
- [ ] Yes, I am willing to implement it.