Wiki and settings differ on vm.swappiness
Hi, thank you for your work!
I was reading the wiki and noticed that its explanation of vm.swappiness differs from the one in the /etc/sysctl.d/99-cachyos-settings.conf file:
The sysctl swappiness parameter determines the kernel's preference for pushing anonymous pages or page cache to disk in memory-starved situations. A low value causes the kernel to prefer freeing up open files (page cache), a high value causes the kernel to try to use swap space, and a value of 100 means IO cost is assumed to be equal. vm.swappiness = 100
and wiki:
The system's swap space preference can be adjusted using the vm.swappiness sysctl parameter. The default value is "30", which means that the kernel will avoid swapping processes to disk as much as possible and will instead try to keep as much data as possible in memory. A lower swappiness value generally leads to improved performance but may lead to decreased stability if the system runs out of memory.
Currently the default is 100, not 30 as the wiki says. I have always read that vm.swappiness works more like the wiki describes, and not like the CachyOS file describes.
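For reference, this is how I checked the value currently in effect (using the standard procps-ng sysctl tool; reading procfs directly is equivalent):

```
# Query the value currently in effect
sysctl vm.swappiness
# vm.swappiness = 100

# Equivalent: read it straight from procfs
cat /proc/sys/vm/swappiness
# 100
```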
Am I missing or misunderstanding something?
THANKS!
Hi @alexjp,
We recently made some changes because of the swappiness value. @ventureoo wrote a good explanation here of why we made this change:
https://github.com/CachyOS/CachyOS-Settings/pull/19
I will update the wiki tomorrow to reflect the recent changes, thank you!
Maybe @ventureoo can explain in a bit more detail why this change was made.
awesome, thanks! always learning!!!
So the behaviour changed in 5.8! That explains it :)
Hmm, since I am a zram user, I am gonna set it to 200 :)
100 is the optimal value for ZRAM. 200 would probably have the unfortunate consequence of reclaim ignoring the page (file) cache entirely, which would not be good for performance. In addition, ZRAM is usually quite small, so it will probably get clogged very quickly in low-memory situations.
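To illustrate, here is a sketch of a sysctl drop-in that keeps the recommended value (the file name is just an example; CachyOS ships its own 99-cachyos-settings.conf):

```
# /etc/sysctl.d/99-swappiness.conf -- example drop-in, not the shipped file
# 100 treats the IO cost of swapping anon pages (to ZRAM) and of
# re-reading evicted file pages as roughly equal; values above 100
# bias reclaim further toward anon pages at the expense of file cache.
vm.swappiness = 100
```

Apply it with "sysctl --system" or by rebooting.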
Unfortunately, it is suspected that vm.swappiness under lru_gen (a.k.a. MGLRU) may not work the way it did under the previous LRU aging algorithm. Traditionally, vm.swappiness was supposed to control overall reclaim behaviour: how aggressively the kernel chooses to swap out anon pages instead of dropping file cache.

From my recent study, it seems that in today's MGLRU, vm.swappiness only controls where to "begin" searching for evictable pages within a single LRU generation; sooner or later, all pages in the generation, anon and file alike, have to be evicted before progressing to the next generation. The effect of vm.swappiness is thus limited to small temporal windows, and it has essentially no effect in the long term. You can confirm this by setting vm.swappiness=200 and filling up your RAM: soon you will see all the file cache easily wiped out, and the difference made by vm.swappiness is negligible.

I suspect this is a design problem. It happens because MGLRU deals with generations of "folios", each of which holds both anon and file pages in a certain temporal sequence, and before making any step forward it must process all pages within the generation, or otherwise the LRU algorithm breaks. I think the ideal solution is a design change that keeps the anon lruvec and the file lruvec separate.
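If you want to reproduce this, here is a rough sketch of the experiment (assumes a kernel with MGLRU built in; stress-ng is just one convenient way to generate anonymous memory pressure, not the only one):

```
# Check that MGLRU is enabled (a non-zero bitmask means on)
cat /sys/kernel/mm/lru_gen/enabled

# Bias reclaim as far toward anon pages as the knob allows
sudo sysctl -w vm.swappiness=200

# Note the current file cache, apply anon memory pressure, then compare
grep -E 'MemFree|Cached|SwapFree' /proc/meminfo
stress-ng --vm 1 --vm-bytes 90% --vm-keep --timeout 60s
grep -E 'MemFree|Cached|SwapFree' /proc/meminfo
# Despite swappiness=200, "Cached" drops sharply as whole generations
# are evicted, matching the behaviour described above.
```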