Failure to render large files.
I can't get codevis to render the Linux kernel on machines with only 16GB of memory. This is unexpected, since it was thought that mmap would allow working on images larger than can be stored in memory.
When attempting to render the Linux kernel:
On my 16GB Arch Linux system with no allocated swap, I get
Error: Cannot allocate memory (os error 12)
On my 16GB Manjaro Linux system with 17GB of swap, I get
Killed
Sometimes Killed pops up before the image is done rendering; sometimes it happens after the render has finished, while the program is attempting to save the image to disk.
The second system also definitely has its swap memory filled by codevis: 12.9GB of it.
https://github.com/sloganking/codevis/issues/19#issuecomment-1294498993
I recommend running with the flags that increase verbosity; then you can see where it struggles.
@Byron Which flags are you talking about?
An anonymous mmap is supposed to be transparently backed by a file on disk, and I'd think that's independent of the system's swap size as well. Its benefit is that it is removed automatically when the process terminates, if it ever existed on disk at all.
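To make that concrete, here is a minimal sketch of an anonymous map using the memmap2 crate. The size is made up and this isn't the actual codevis source, just the general technique being discussed:

```rust
use memmap2::MmapMut;

fn main() -> std::io::Result<()> {
    // Illustrative size only; the kernel render needs a far larger buffer.
    let len: usize = 1 << 30; // 1 GiB

    // An anonymous, mutable memory map: pages are allocated lazily and the
    // mapping is discarded automatically when the process terminates.
    let mut map = MmapMut::map_anon(len)?;

    // Derefs to &mut [u8], so it can be used like an ordinary pixel buffer.
    map[0] = 0xFF;
    println!("mapped {} bytes without touching a Vec", map.len());
    Ok(())
}
```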
Linux has a directory for temporary storage (/tmp, typically a size-limited tmpfs), but it puts a limit on the size of files that can live there. I'm not sure whether mmap is attempting to use that location, running out of space, and then falling back to system swap instead. Either way, I can't get rendering to start on my system that has no swap, and I can't get it to finish and save on my system that has swap, even though that one does start rendering.
What OS are you successfully rendering Linux on? Does that system have 16GB of memory as well?
https://github.com/sloganking/codevis/pull/11#issuecomment-1237559206
Saving the image as PNG also takes a couple of additional gigabytes of (real) memory, as it compresses the image buffer into memory first, while other formats can flush the buffer to disk more directly (but don't support these dimensions of 86k*48k :D).
Could needing to store the compressed image in memory before saving to disk be why one of my systems fails while saving the image?
Sometimes Killed pops up before the image is done rendering; sometimes it happens after the render has finished, while the program is attempting to save the image to disk.
In the latter case, there will be another big allocation to keep the compressed PNG image in memory, usually 3GB when handling the Linux kernel image. It's interesting that macOS just seems to do exactly what I expect in that moment. Also note that it has transparent LZ4 compression of virtual memory, so that probably helps. Swap on macOS seems to be super dynamic as well; by default it doesn't even have a swap file anymore.
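To show where that second allocation comes from, here is a minimal sketch of an in-memory PNG encode with a recent version of the image crate. The dimensions are shrunk and this is not the actual codevis saving code:

```rust
use std::io::Cursor;
use image::{ImageFormat, RgbaImage};

fn main() -> image::ImageResult<()> {
    // Tiny stand-in; the real kernel render is roughly 86k x 48k pixels.
    let img = RgbaImage::new(1024, 1024);

    // The compressed PNG is assembled in this Vec before anything reaches
    // the disk, so it lives alongside the full pixel buffer. For the kernel
    // image that second buffer is the ~3GB allocation mentioned above.
    let mut compressed = Vec::new();
    img.write_to(&mut Cursor::new(&mut compressed), ImageFormat::Png)?;
    std::fs::write("out.png", &compressed)?;
    Ok(())
}
```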
It's worth noting that the system was impaired while a Vec was still used, so switching to a mmap is what enabled this kind of memory magic in the first place.
What OS are you successfully rendering Linux on? Does that system have 16GB of memory as well?
It's macOS with 16GB of RAM, and I can render WebKit as well without trouble (it's even larger), even while the IDE is open, which consumes a whopping 6GB of RAM due to the *cough* JVM.
Could needing to store the compressed image in memory before saving to disk be why one of my systems fails while saving the image?
It's not unlikely; after all, that memory is just a plain Vec, which stresses the already stretched system. I tried all the uncompressed image formats, but couldn't find one that supports these dimensions.
Maybe a workaround could be to resize the small images that are generated by the threads and put them onto a (much) smaller final image. These 'small' images are already sent with a Vec<u8> buffer, so they can be resized easily with DynamicImage. This should bring memory consumption down significantly. I'd see this as an optional flag though, so people can still get the full-resolution image if they want to. The difficulty will be doing all the math to properly insert these scaled images into the final one; it would probably need fractional pixels, so pixel-by-pixel copies might not cut it, or might cause visual artefacts.
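A rough sketch of how that could look with DynamicImage follows; the function name, parameters, and the scale factor are all hypothetical, not a proposed implementation:

```rust
use image::{imageops, DynamicImage, RgbaImage};

/// Hypothetical sketch of the proposed flag: scale each per-thread tile
/// down before placing it, so the final canvas can be `scale` times smaller.
fn place_scaled(canvas: &mut RgbaImage, tile: &DynamicImage, x: u32, y: u32, scale: f32) {
    let scaled = tile.resize_exact(
        ((tile.width() as f32) * scale).round().max(1.0) as u32,
        ((tile.height() as f32) * scale).round().max(1.0) as u32,
        imageops::FilterType::Triangle,
    );
    // Tile positions become fractional after scaling; rounding them back to
    // whole pixels is exactly where the visual artefacts could creep in.
    let sx = ((x as f32) * scale).round() as i64;
    let sy = ((y as f32) * scale).round() as i64;
    imageops::overlay(canvas, &scaled, sx, sy);
}
```

Neighbouring tiles whose edges land on the same rounded pixel would either overlap or leave a seam, which is the fractional-pixel math that would need to be worked out properly.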