
Subdivide tasks to prevent Luajit OOM

Open Fixer-007 opened this issue 9 years ago • 3 comments

Sometimes, if you are doing something big with WorldEdit, you may end up with a LuaJIT OOM. I wonder if it is possible for WorldEdit to divide big tasks into smaller ones to avoid OOMs (e.g. dividing a big volume into smaller chunks and processing them one by one).

Fixer-007 avatar Oct 13 '16 23:10 Fixer-007

Or use the non-LuaJIT version instead :/

Fixer-007 avatar Jan 01 '17 16:01 Fixer-007

+1

If we know LuaJIT can only handle, say, 50,000 nodes, then a selection of 1,000,000 nodes could be saved into 20 temporary files and then concatenated into a single file. This feature would be sooooo nice.

Using a non-LuaJIT version would only give you 4x the area. @Fixer-007's original solution would give you an essentially unlimited save area. Perhaps the only limit is that the concatenated file must be less than 1GB.
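The batching idea described above can be sketched language-agnostically (Python here, not WorldEdit's actual Lua code; the 50,000-node limit and 1,000,000-node selection are the illustrative figures from this thread):

```python
def split_volume(total_nodes, max_nodes):
    """Split a selection of total_nodes into batches of at most max_nodes.

    Each batch is a (start, end) half-open range; in practice each batch
    would be serialised to its own temporary file, then all files
    concatenated into the final save.
    """
    batches = []
    start = 0
    while start < total_nodes:
        end = min(start + max_nodes, total_nodes)
        batches.append((start, end))
        start = end
    return batches

# 1,000,000 nodes at a 50,000-node limit -> 20 batches
print(len(split_volume(1_000_000, 50_000)))  # 20
```

Since each batch stays under the per-operation limit, LuaJIT's allocator never has to hold the whole selection in memory at once.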

9joshua avatar Jun 02 '17 19:06 9joshua

Just seen this while browsing the issues here. I've implemented a //subdivide command in my WorldEditAdditions mod, which I have successfully tested on areas of 2000x150x2000 (~600 million nodes).

The documentation for the command can be found here: https://github.com/sbrl/Minetest-WorldEditAdditions/blob/master/Chat-Command-Reference.md#subdivide-size_x-size_y-size_z-cmd_name-args

I'm still tweaking and refining it, but it seems pretty stable. I'm currently chasing down an issue whereby it gets stuck a random percentage of the way through, but it seems that this only occurs in low-resource environments (i.e. on a Raspberry Pi 3B+). So long as you have plenty of RAM, it should work fine.

In short, use the command like so:

//subdivide 10 10 10 clearcut

This would run //clearcut in 10x10x10 chunks over the defined region. //subdivide is saferegion-aware, so it handles large chunks properly. If your goal is for it to complete in the least time possible, I recommend larger chunk sizes. It is also async, so you can execute other commands at the same time (though chunk loading is extremely slow while //subdivide is running).
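Conceptually, this kind of command walks the selection in fixed-size boxes and runs the wrapped command on each box in turn. A minimal sketch of that traversal (Python for illustration, not the mod's actual Lua implementation; `iter_chunks` is a hypothetical helper name):

```python
def iter_chunks(size, chunk):
    """Yield (min_corner, max_corner) inclusive boxes of at most
    chunk[0] x chunk[1] x chunk[2] nodes covering a region of the
    given size, clamping the final box on each axis to the region edge."""
    for x in range(0, size[0], chunk[0]):
        for y in range(0, size[1], chunk[1]):
            for z in range(0, size[2], chunk[2]):
                lo = (x, y, z)
                hi = (min(x + chunk[0], size[0]) - 1,
                      min(y + chunk[1], size[1]) - 1,
                      min(z + chunk[2], size[2]) - 1)
                yield lo, hi

# A 2000x150x2000 region in 10x10x10 chunks -> 200 * 15 * 200 = 600,000 boxes
print(sum(1 for _ in iter_chunks((2000, 150, 2000), (10, 10, 10))))  # 600000
```

Larger chunk sizes mean fewer per-chunk overheads (command dispatch, saferegion checks), which is why they finish faster when memory allows.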

sbrl avatar Feb 07 '21 02:02 sbrl