
vere uses too much memory

Open · joemfb opened this issue 1 year ago · 1 comment

The current release (v2.6) and prior maintain the entire persistent state in memory. As of #402, clean, persistent pages of the home-road heap (north.bin) are kept in a file-backed mapping, reducing the resident set and improving performance under memory pressure (no need to swap those pages). When the snapshot is updated, the full home-road heap is remapped to north.bin, and all ephemeral memory (contiguous free space within the loom) is discarded with MADV_DONTNEED. But much more can be done.


To reduce ephemeral memory usage:

  • unused pages can be returned to the host OS more aggressively

MADV_FREE or MADV_DONTNEED could be issued as often as after every event.
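
for illustration, a minimal sketch of what issuing that advice after an event might look like, assuming a page-aligned pointer and length for the loom's contiguous free region (the names here are illustrative, not existing vere symbols):

```c
#include <sys/mman.h>

/*  hypothetical helper: return the loom's free region to the OS after
**  an event. fre_v must be page-aligned; fre_v/len_i are illustrative
**  names, not vere symbols.
*/
static void
_free_ephemeral(void* fre_v, size_t len_i)
{
#ifdef MADV_FREE
  /*  lazy: the kernel reclaims these pages only under memory pressure,
  **  so back-to-back events that reuse the space pay almost nothing
  */
  madvise(fre_v, len_i, MADV_FREE);
#else
  /*  eager: pages are dropped immediately and refault as zero-fill  */
  madvise(fre_v, len_i, MADV_DONTNEED);
#endif
}
```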

  • consume memory more incrementally

unused pages are currently initialized to a dirty, read-write state, which reduces page faults. an option to initialize them to a clean, read-only state instead would reduce pre-commitment overhead (RAM + swap), allowing larger loom sizes by default.
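
a sketch of what that option could look like, assuming a SIGSEGV handler that upgrades pages on first write (MAP_NORESERVE is Linux-specific, and all names here are illustrative):

```c
#include <stddef.h>
#include <sys/mman.h>

/*  reserve the loom clean and read-only: no RAM or swap is committed
**  up front, at the cost of one fault per first-written page
*/
void*
loom_reserve(size_t len_i)
{
  return mmap(NULL, len_i, PROT_READ,
              MAP_ANON | MAP_PRIVATE | MAP_NORESERVE, -1, 0);
}

/*  in the SIGSEGV handler, commit just the page that was written  */
int
loom_commit_page(void* pag_v, size_t pag_siz_i)
{
  return mprotect(pag_v, pag_siz_i, PROT_READ | PROT_WRITE);
}
```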

  • roll-your-own swap

for environments where configuring/managing swap space is impossible or onerous (apparently including kubernetes), the ephemeral portions of the loom could be placed in an ephemeral file-backed mapping (and never sync'd).
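
one possible shape for that, sketched below: an unlinked temporary file backs the ephemeral region, so the kernel can evict its pages to the filesystem without any swap configuration (the path handling and names are illustrative):

```c
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/*  back the ephemeral region with an unlinked temp file; the kernel
**  can write these pages back and evict them under pressure, and the
**  file disappears when the process exits. never msync()'d.
*/
void*
loom_ephemeral(const char* pax_c, size_t len_i)
{
  int fid_i = open(pax_c, O_RDWR | O_CREAT | O_EXCL, 0600);
  if ( fid_i < 0 ) return MAP_FAILED;

  unlink(pax_c);                            /*  file-backed, no name  */
  if ( ftruncate(fid_i, (off_t)len_i) ) {   /*  size the backing file */
    close(fid_i);
    return MAP_FAILED;
  }

  return mmap(NULL, len_i, PROT_READ | PROT_WRITE,
              MAP_SHARED, fid_i, 0);
}
```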

  • improve tooling efficiency

mass, pack, and meld each make the entire persistent state resident. additionally, meld uses absurd amounts of off-loom heap.

  • adapt the ares PMA design to vere

long term, this is the most efficient/promising design for a single-level store


To reduce persistent memory usage:

  • reclaim from cold jet state

unreferenced registrations could be deleted (after a full |meld, to ensure that refcounts are semantically precise).

  • compress cells

a special representation for small allocations would allow us to reduce cells from 24 to 16 bytes. (aligning on 16 bytes would then allow doubling the loom maximum to 16GB)
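
purely as illustration, one hypothetical 16-byte layout, assuming the per-allocation box header can be dropped for cell-sized allocations (this is not vere's current u3a_cell):

```c
#include <stdint.h>

/*  hypothetical compact cell: refcount and mug fold into the cell
**  itself, with no surrounding box header. 4 x 32-bit words = 16
**  bytes, vs. 24 bytes for a boxed cell today (assumption).
*/
typedef struct _cell16 {
  uint32_t use_w;  /*  reference count                      */
  uint32_t mug_w;  /*  cached hash (0 if not yet computed)  */
  uint32_t hed_w;  /*  head noun                            */
  uint32_t tel_w;  /*  tail noun                            */
} cell16;

_Static_assert( 16 == sizeof(cell16), "cell16 must be 16 bytes" );
```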

see urbit/urbit#6599

joemfb · May 22 '23 15:05

It is useful to be able to put a (mostly) hard limit on the amount of memory that a ship can use. Demand paging is relatively good at increasing the number of ships that can be run in a given amount of memory, but relatively bad at reducing the amount of memory that a ship needs, at least when interpreted naively. If we roll our own swap, then we can say something like: as long as you have a few tens of MB for vere's non-loom state, you can restrict memory as much as you want, and it will only impact performance (and ability to meld).

To roll our own swap, we would need to map the ephemeral memory to a file, and I suppose also any time a clean page is dirtied, that would also get mapped into that file? And yeah, never sync it. Is that as easy as:

  • creating such a file
  • changing the second mmap in _ce_loom_mapf_north from MAP_ANON | MAP_PRIVATE to MAP_SHARED
  • in _ce_flaw_protect, mmap the faulting page into that ephemeral file (can replace the mprotect call; see the sketch after this list)
  • _ce_loom_mapf_north will already remap the dirty heap pages back into the snapshot on save, so no additional changes needed there
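
A rough sketch of that fault-handler step, with a caveat: the fixed-address remap below discards the clean page's prior contents, so a real implementation would first have to copy the snapshot data into the ephemeral file. All names besides _ce_flaw_protect and the mmap flags are illustrative, not vere symbols:

```c
#include <stdint.h>
#include <sys/mman.h>

/*  on a write fault, remap the page into the ephemeral file at the
**  matching offset instead of mprotect()ing it read-write.
**  flt_v is the faulting address; lom_v is the loom base; eph_fid_i
**  is the ephemeral file's descriptor.
*/
static int
flaw_to_ephemeral(void* flt_v, void* lom_v, size_t pag_siz_i,
                  int eph_fid_i)
{
  uintptr_t pag_p = (uintptr_t)flt_v & ~(uintptr_t)(pag_siz_i - 1);
  off_t     off_i = (off_t)(pag_p - (uintptr_t)lom_v);

  /*  caveat: MAP_FIXED atomically replaces the old mapping, losing
  **  the page's prior (snapshot) contents; they would need to be
  **  pwrite()n into the ephemeral file at off_i before this call
  */
  void* map_v = mmap((void*)pag_p, pag_siz_i,
                     PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_FIXED, eph_fid_i, off_i);

  return ( MAP_FAILED == map_v ) ? -1 : 0;
}
```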

philipcmonk · May 22 '23 20:05