html5ever

Configurable memory limit

Open Kixiron opened this issue 5 years ago • 6 comments

I’ve been running into an issue where parsing large documents (a few hundred MiB) causes massive memory usage that can slow down or even crash our service in production (sometimes triggering OOM kills). The ability to limit kuchiki's memory usage would be invaluable. A memory limit could also enable the use of preallocated buffers, which would do wonders for performance as well.
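Neither html5ever nor kuchiki exposes a memory cap today, but one application-level stopgap is to wrap the global allocator so allocations fail cleanly once a budget is exceeded, instead of the process being OOM-killed. A minimal sketch using only the standard library (the `CappedAlloc` name and the 2 GiB limit are illustrative, not part of any crate's API):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

/// Tracks live heap bytes and refuses allocations past LIMIT,
/// so a runaway parse gets an allocation failure rather than an OOM kill.
struct CappedAlloc;

static LIVE: AtomicUsize = AtomicUsize::new(0);
const LIMIT: usize = 2 * 1024 * 1024 * 1024; // 2 GiB budget (illustrative)

unsafe impl GlobalAlloc for CappedAlloc {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let prev = LIVE.fetch_add(layout.size(), Ordering::Relaxed);
        if prev + layout.size() > LIMIT {
            // Roll back the counter and signal failure to the caller.
            LIVE.fetch_sub(layout.size(), Ordering::Relaxed);
            return std::ptr::null_mut();
        }
        System.alloc(layout)
    }

    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout)
    }
}

#[global_allocator]
static ALLOC: CappedAlloc = CappedAlloc;

fn main() {
    // Any allocation in the process, including the parser's, now counts
    // against the budget.
    let buf = vec![0u8; 1024];
    println!(
        "allocated {} bytes, live counter = {}",
        buf.len(),
        LIVE.load(Ordering::Relaxed)
    );
}
```

This is coarse (a failed allocation still aborts by default unless fallible allocation is used), but it bounds the blast radius of a single pathological document.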

Kixiron avatar Jun 13 '20 18:06 Kixiron

Are there any examples you can share? It seems bizarre that a few hundred MiB documents can cause OOM.

While limiting buffers is one solution, this is probably a pathological case that the parser should be able to handle on its own.

Ygg01 avatar Jun 15 '20 09:06 Ygg01

This file (300 MiB) is our most problematic example; it and similarly sized ones have OOM'd a 15 GB server, and even smaller ones have similarly painful consequences. The code we use to handle HTML is here; it doesn't seem like it should have any specific issues.

Edit: here's the memory and CPU usage from when that file was accessed; the cut-off in the graph is due to the OOM killer.

Kixiron avatar Jun 15 '20 16:06 Kixiron

Other than downloading it from Firefox Send, it's possible to regenerate that file by running:

curl -O https://static.crates.io/crates/jni-android-sys/jni-android-sys-0.0.4.crate
tar xvzf jni-android-sys-0.0.4.crate
cd jni-android-sys-0.0.4
cargo doc --no-deps --features api-level-28,force-define

The file will be located at:

target/doc/src/jni_android_sys/reference/api-level-28.rs.html

The file is the rendered, syntax-highlighted version of a 946k-line source file.

pietroalbini avatar Jun 15 '20 16:06 pietroalbini

As a point of interest, I just ran the jni-android-sys HTML through kuchiki's find_matches example program. htop didn't report the program taking more than 4 GB of memory over the course of its execution.

jdm avatar Jun 15 '20 20:06 jdm

We took a closer look at the metrics for the VM and saw four bursts of downloads from S3, so it's likely that four requests happened at the same time. That would explain the OOM if a single parse takes around 4 GB of RAM.

A memory footprint 10x the size of the file seems a little large to me; is it possible to improve that? We're hoping to switch to parsing only a single file at a time, which should help, but ideally we'd use less memory in the first place.
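Serializing the parses as described above can be done without changing the parser at all: gate each parse behind a counting semaphore so peak memory is bounded by one (or N) concurrent parses. A standard-library sketch (the `Semaphore` type here is hand-rolled for illustration, not from any crate):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// A counting semaphore built from Mutex + Condvar,
/// used to cap how many parses run concurrently.
struct Semaphore {
    permits: Mutex<usize>,
    cv: Condvar,
}

impl Semaphore {
    fn new(n: usize) -> Self {
        Semaphore { permits: Mutex::new(n), cv: Condvar::new() }
    }

    /// Block until a permit is available, then take it.
    fn acquire(&self) {
        let mut p = self.permits.lock().unwrap();
        while *p == 0 {
            p = self.cv.wait(p).unwrap();
        }
        *p -= 1;
    }

    /// Return a permit and wake one waiter.
    fn release(&self) {
        *self.permits.lock().unwrap() += 1;
        self.cv.notify_one();
    }
}

fn main() {
    // Allow only one "parse" at a time across four simulated requests,
    // so peak memory is that of a single parse, not four.
    let sem = Arc::new(Semaphore::new(1));
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let sem = Arc::clone(&sem);
            thread::spawn(move || {
                sem.acquire();
                println!("request {}: parsing", i);
                sem.release();
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
}
```

With a limit of 1 this trades latency under load for a hard bound on concurrent parser memory, which matches the four-simultaneous-requests failure mode described above.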

jyn514 avatar Jun 16 '20 21:06 jyn514

I've filed https://github.com/kuchiki-rs/kuchiki/issues/73 for one quick win for memory usage for your use case.

jdm avatar Jul 01 '20 16:07 jdm