wasm64: support memory larger than 16 GB
I asked this question at https://github.com/emscripten-core/emscripten/discussions/23966 and was told it's a limitation imposed by the spec.
In wasm64, the spec caps the maximum memory at 16 GB: https://github.com/WebAssembly/memory64/blob/9003cd5e24e53b84cd9027ea3dd7ae57159a6db1/document/js-api/index.bs#L1785.
Could we please get wasm64 to support as much memory as the host machine's RAM allows? I'd also like to understand the reasons behind the 16 GB limit, given that 64-bit integers can technically index memories much larger than 16 GB.
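For concreteness, here is a minimal sketch of where the cap bites through the JS API. It assumes an engine with memory64 support; the `address: "i64"` descriptor field follows the memory64 proposal's spelling, and the exact error type (and whether page counts must be Numbers or BigInts) varies by engine version, so treat this as an illustration rather than guaranteed behavior.

```ts
// Each wasm page is 64 KiB. The JS API caps 32-bit memories at 65536 pages
// (4 GiB) and 64-bit memories at 262144 pages: 262144 * 64 KiB = 16 GiB.
const PAGE_BYTES = 64 * 1024;
const MAX_MEMORY64_PAGES = 262144;
console.log((MAX_MEMORY64_PAGES * PAGE_BYTES) / 2 ** 30, "GiB"); // 16

// Requesting a 64-bit memory whose maximum exceeds that cap is rejected
// up front, regardless of how much RAM the host actually has.
try {
  new WebAssembly.Memory({
    address: "i64",                    // memory64: 64-bit addressing
    initial: 1,                        // pages
    maximum: MAX_MEMORY64_PAGES + 1,   // one page over the 16 GiB limit
  });
} catch (e) {
  console.log("rejected by the JS API:", e); // typically a RangeError/TypeError
}
```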
@sbc100
Note this is a standard limit imposed by the JS API, it's not necessarily the same limit in other embedding contexts. Any limit is subject to future revision, but if your use case is outside of the web, then an engine could provide more memory.
Note this is a standard limit imposed by the JS API
Is this documented somewhere?
Yes, the link you provided points into the JS-API documentation, which is technically separate from the core Wasm specification.
Ah, I see. Is this due to a limitation of JS execution engines like browsers/node? Where do I lobby to increase the JS-API limit?
I believe you have found the correct forum. The JS-API spec is maintained alongside the wasm core spec here: https://github.com/WebAssembly/spec/blob/main/document/js-api/index.bs.
I encountered the same issue as you did, and I discovered that the 16GB memory limit is also documented in the V8 engine's source code (src/wasm/wasm-limits.h in the v8/v8 repository). Does this imply that increasing the memory limit is difficult unless the V8 engine is updated? Or is it possible for browsers using the V8 engine to allocate more than 16GB of memory now?
This is the right place to ask for the JS-API limit to be increased. SpiderMonkey/Firefox shares the same restriction (taken from the JS-API) that you cannot ever grow a wasm memory above 16GiB.
In practice it's unlikely that you'll even be able to grow to 16GiB. On 32-bit systems we have a much lower limit, and machines with 16GiB of RAM are relatively rare in the Firefox user base. Any web experience that requires more than 16GiB is unlikely to run for most users.
I'm open to raising the limit someday, but I'm skeptical it's going to be used that often. It also makes browser tests of large memories really difficult, as we've noticed that our test runners get OOM-killed frequently once they start using that much memory.
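For illustration, here is a rough sketch of what that looks like in practice, again assuming memory64 support and the proposal's `address: "i64"` spelling (some engine versions may expect BigInt page counts); how far the grow loop gets depends entirely on how much memory the host will actually hand out.

```ts
// Declare a 64-bit memory whose maximum is exactly the JS-API cap
// (262144 pages = 16 GiB), then grow toward it 1 GiB at a time.
const mem = new WebAssembly.Memory({
  address: "i64",
  initial: 0,
  maximum: 262144, // the 16 GiB ceiling
});

const GIB_IN_PAGES = 16384; // 1 GiB / 64 KiB

for (let gib = 1; gib <= 17; gib++) {
  try {
    mem.grow(GIB_IN_PAGES);
    console.log(`grew to ${gib} GiB`);
  } catch (e) {
    // Either the host refused the allocation (the common case well before
    // 16 GiB) or we hit the declared maximum / JS-API limit at 16 GiB;
    // grow() surfaces both as a RangeError.
    console.log(`grow failed at ~${gib} GiB:`, e);
    break;
  }
}
```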
@eqrion
Any web experience that requires more than 16GiB is unlikely to run for most users. I'm open to raising the limit someday, but I'm skeptical it's going to be used that often.
I definitely agree this would be used by few people, and rarely.
But as I see it, the request here is for that small use case. That is, this isn't for something like a game that we want everyone to run. It is for something like a graphical editor, and the rare users of it that have huge data files that actually need more than 16 GB of RAM.
But as I see it, the request here is for that small use case. That is, this isn't for something like a game that we want everyone to run. It is for something like a graphical editor, and the rare users of it that have huge data files that actually need more than 16 GB of RAM.
That's fair enough. I guess what I was trying to say is that it's not as simple as just raising the limit, because it complicates our test infrastructure. We had a hard time getting 16GiB tests working reliably. So that needs to be balanced with the number of users.
But I'm open to doing it someday.
In my opinion it would be foolish to continue to increase the memory size limit without also adding new APIs that give you better control over memory. We still don't even have a way to release memory back to the operating system when you're done with it, for example.
Also, my impression is that those who want large address spaces actually want to do more sophisticated things with that address space, e.g. memory mapping or reserve/commit for their allocators. I would be hesitant to increase the limit, only to find out that it still lacks critical features for memory-hungry applications.
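As a concrete (and hedged) sketch of that gap: today's JS API lets you grow a linear memory, but there is no standard counterpart for shrinking it or returning committed pages to the operating system, and nothing resembling reserve/commit or memory mapping.

```ts
// A small 32-bit memory that may grow up to 1024 pages (64 MiB).
const mem = new WebAssembly.Memory({ initial: 16, maximum: 1024 });

// Growing is one-way: grow() commits more pages and returns the old size.
const before = mem.grow(256); // commit another 16 MiB
console.log("grew from", before, "to", mem.buffer.byteLength / 65536, "pages");

// There is no standard mem.shrink() or mem.discard() (those names are purely
// hypothetical); once pages are committed they stay reserved for the lifetime
// of the memory, which is exactly the control that memory-hungry applications
// are missing.
```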