fanlix
I use harvesterhci for VM hosting. [Some heavy-load VMs get cgroup-OOM-killed many times.](https://github.com/harvester/harvester/issues/2419) By checking the cgroup file `/sys/fs/cgroup/memory/kubepods/burstable/pod12914c4e-282a-4f69-bd3b-f75cba6e715e/memory.limit_in_bytes`, I found * 64G vmi, cgroup mem...
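For reference, a minimal sketch of that diagnostic step done programmatically, assuming cgroup v1 as in the path above; the `<pod-uid>` segment is a placeholder to be replaced with the real pod directory:

```c
/* Read a pod's cgroup-v1 memory limit. The pod path is illustrative;
 * substitute the actual pod UID directory under kubepods/burstable. */
#include <stdio.h>

int main(void) {
    const char *path =
        "/sys/fs/cgroup/memory/kubepods/burstable/<pod-uid>/memory.limit_in_bytes";
    FILE *f = fopen(path, "r");
    if (!f) { perror("fopen"); return 1; }
    unsigned long long limit;
    /* An "unlimited" cgroup reports a very large sentinel value. */
    if (fscanf(f, "%llu", &limit) == 1)
        printf("cgroup memory limit: %llu bytes (%.1f GiB)\n",
               limit, limit / (1024.0 * 1024 * 1024));
    fclose(f);
    return 0;
}
```

Comparing this limit against the VM's configured RAM shows whether the pod's cgroup cap is set below what the guest can actually consume.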
**Describe the bug** A VM with heavy memory activity is killed by harvester.suse.OOM * VM setup RAM = 48G. * 100G RAM available before VM start * RAM cost of program...
**Is your feature request related to a problem? Please describe.** My harvester cluster crashed, so I reinstalled a new harvester, imported all VM backups (NFS), and tried to restore them, but the restore fails....
## VM running for a month * volume setup 50G * df shows 30% usage * actual cost 105G of physical disk space (about 200% of the 50G setup size and 700% of the data: 30% of 50G ≈ 15G of actual data, and 105G ≈ 2× the volume, 7× the data) ###...
## problem K8s has a default limit of 110 pods per node. My harvester control node with 20 VMs already uses 100 pods. #### pod numbers by namespaces: * cattle 13...
stat: add mem stat for jemalloc/os ### Changes - 1, add opts arg for mallctl() - 2, collect jemalloc.mem info - 3, collect os.mem info via /proc - 4, add...
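A minimal sketch of the idea behind items 2 and 3, not the PR's actual code: it assumes an unprefixed jemalloc build (so the entry point is `mallctl` rather than `je_mallctl`) and Linux `/proc`; the stat keys are standard jemalloc mallctl names, and the `read_stat` helper is my own. Compile with `cc stats.c -ljemalloc`.

```c
/* Collect allocator-side stats via mallctl(), then the OS-side RSS
 * from /proc/self/status, so the two views can be compared. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <jemalloc/jemalloc.h>

static size_t read_stat(const char *name) {
    size_t v = 0, sz = sizeof(v);
    if (mallctl(name, &v, &sz, NULL, 0) != 0)
        return 0;  /* stat unavailable in this jemalloc build */
    return v;
}

int main(void) {
    /* Advance the epoch so the stats.* values reflect current state. */
    uint64_t epoch = 1;
    size_t esz = sizeof(epoch);
    mallctl("epoch", &epoch, &esz, &epoch, esz);

    printf("jemalloc allocated: %zu\n", read_stat("stats.allocated"));
    printf("jemalloc resident:  %zu\n", read_stat("stats.resident"));

    /* OS view: scan /proc/self/status for the VmRSS line. */
    FILE *f = fopen("/proc/self/status", "r");
    if (f) {
        char line[256];
        while (fgets(line, sizeof(line), f))
            if (strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        fclose(f);
    }
    return 0;
}
```

A gap between `stats.resident` and VmRSS points at memory held outside jemalloc (or pages jemalloc has released back to the OS), which is the kind of signal a combined jemalloc/os stat makes visible.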