In case you don’t know, Multi-Gen LRU is an alternative LRU implementation that optimizes page reclaim and improves performance under memory pressure. Page reclaim decides the kernel’s caching policy and ability to overcommit memory, and it directly impacts kswapd CPU usage and RAM efficiency.
Has anyone enabled this feature on their machines? Have you noticed any performance gains or memory management improvements? It was developed by Google and is reportedly used in ChromeOS and Android.
I use this on all of my machines now; it does a much better job of preventing swap thrashing than the original LRU code. I’ve been running the MGLRU patch set for about two years (?) without any downside that I’ve been able to detect.
Edit: We switched to the MGLRU patch set for all of our Arch kernel packages in the asus-linux community back when the patch set was still in development, and there have been zero problem reports across an installed base of at least a couple hundred people. As far as I can tell, there really isn’t a downside to using it.
How do I enable it? I’ve been having huge issues with swap reclaim and memory usage in general. Do I have to compile the kernel myself, and if so, where is the option located?
Check your running kernel’s config at /proc/config.gz (or the config file under /boot) to see if it’s enabled. https://docs.kernel.org/admin-guide/mm/multigen_lru.html has admin/usage details.
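For example, something like this is what I’d run to check (assuming your kernel exposes /proc/config.gz, i.e. CONFIG_IKCONFIG_PROC is set; the sysfs knob is the one documented in the admin guide above):

```
# Was the running kernel built with multi-gen LRU?
zcat /proc/config.gz | grep LRU_GEN
# or, if your distro puts the config in /boot instead:
grep LRU_GEN "/boot/config-$(uname -r)"

# If CONFIG_LRU_GEN=y, check/toggle it at runtime via sysfs
cat /sys/kernel/mm/lru_gen/enabled        # 0x0007 means all components are on
echo y | sudo tee /sys/kernel/mm/lru_gen/enabled
```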
If your current kernel wasn’t built with multi-gen LRU, you’ll need to build a kernel package that does include it, or switch to one that enables it.
Ah, then where in the menuconfig is this option located, if I need to build my own kernel…
Use the search function and look for LRU_GEN. Try make nconfig while you’re at it; it’s the newer terminal config menu system.
Man, make nconfig is such a lifesaver when configuring a new kernel, it’s great.
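If you end up building your own kernel, you can also flip the symbols non-interactively instead of hunting through the menus; roughly something like this from the source tree (scripts/config ships with the kernel source, and LRU_GEN_ENABLED just makes the feature default-on at boot):

```
# Start from the running kernel's config
zcat /proc/config.gz > .config
# Enable multi-gen LRU and turn it on by default
scripts/config --enable LRU_GEN --enable LRU_GEN_ENABLED
# Fill in any remaining new options with their defaults
make olddefconfig
```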
I don’t even use swap anymore. 32GB of RAM ought to be enough.
I know the feeling. I have a “desktop” that has 640GB of memory. Now, I say “desktop” because while it IS a desktop, I mainly use it for a nested virtualization lab.
Of course, creating a 500GB RAM disk for some ungodly fast file manipulation is not something I’ve ever thought about or done. /s
In case you’re wondering, it’s an Intel Mac Pro that just happened to be compatible with the memory in a retired production blade… so yay!
I jammed 64GB into my work laptop sort of by accident (I thought the original kit I ordered was 2x16GB, but it was 1x32GB, so why not keep going) and I have no regrets. 20GB tmpfs for builds? Why not?
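For anyone who wants to copy that, the build tmpfs is basically a one-liner; the mount point and size here are just examples:

```
# 20GB RAM-backed scratch space for builds (contents are gone after reboot)
sudo mkdir -p /mnt/build
sudo mount -t tmpfs -o size=20G,noatime tmpfs /mnt/build
```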
I still keep some swap around on the off chance something eats up a shit ton of memory. Dealing with the OOM killer is always a bad time.
If you don’t want to use disk swap, there’s always zram; it’ll consume roughly ~4GB of RAM for ~12GB of active swap if you use zstd with it. It won’t allocate any (meaningful amount of) compressed memory if swap isn’t active, so there’s not much of a downside here.
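A rough sketch of the manual setup, in case anyone wants to try it (device name and sizes are just examples; many distros also ship a helper like systemd’s zram-generator that does the same thing from a config file):

```
# Create a zram device, use zstd, and swap on it
sudo modprobe zram
echo zstd | sudo tee /sys/block/zram0/comp_algorithm   # pick the compressor before sizing
echo 12G  | sudo tee /sys/block/zram0/disksize         # uncompressed size of the swap space
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0                  # prefer zram over any disk swap
```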
Wait until 32GB of RAM isn’t enough and the OOM killer decides to randomly kill processes. Especially with memory leaks…
I especially love disabling swap on Windows; sometimes it gets weird.