Robin Freyler
Thank you for the investigation, @alexcrichton! If your assessment is correct, I have to admit that it is a bit worrisome to me that the Wasm spec seems to...
I just ran `flamegraph` on the `translate/tiny_keccak/checked/lazy-translation` test case; flamegraphs for `main` and the PR are attached. From the looks of it, I can confirm your assessment that multiple parts of the `wasmparser` validation pipeline...
In my last performance assessment I made a mistake: I did not also enable `no-hash-maps` for the `main` branch when benchmarking `main` against the PR, which made the results...
You are right that this might very well be just an implementation detail that can be fixed. Thanks a lot for the link to the `local.get` checks; I will have...
> If it helps, here is the implementation in SpiderMonkey [1]. We do an upfront check for if the local index is less than where the first non-nullable local is...
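To make that idea concrete, here is a minimal Rust sketch of such a fast path. All names are hypothetical and not taken from SpiderMonkey or `wasmparser`; the point is just that the common case (a `local.get` on a defaultable local) never has to consult per-local initialization state:

```rust
// Hypothetical sketch of the fast path described in the quote above.
struct LocalInitTracker {
    /// Index of the first local whose type is non-defaultable (e.g. a
    /// non-nullable reference). Locals below this index never need
    /// initialization tracking.
    first_non_defaultable: u32,
    /// Initialization state, tracked only for locals at or above
    /// `first_non_defaultable`.
    initialized: Vec<bool>,
}

impl LocalInitTracker {
    fn validate_local_get(&self, index: u32) -> Result<(), String> {
        // Fast path: defaultable locals need no per-local bookkeeping.
        if index < self.first_non_defaultable {
            return Ok(());
        }
        // Slow path: check whether the non-defaultable local was set.
        let slot = (index - self.first_non_defaultable) as usize;
        match self.initialized.get(slot).copied() {
            Some(true) => Ok(()),
            Some(false) => Err(format!(
                "non-defaultable local {index} is read before initialization"
            )),
            None => Err(format!("unknown local index {index}")),
        }
    }
}
```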
I updated the Wasmi PR to bump `wasmparser` to `v0.218.0`, improved and extended the translation benchmarks, and reran them to get a better understanding of where the performance regressions are located and could...
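For reference, a validation-only benchmark along these lines could look like the following sketch, using Criterion and `wasmparser` directly. The benchmark name, fixture path, and setup are placeholders and do not mirror Wasmi's actual benchmark suite:

```rust
// Sketch of a validation-only Criterion benchmark; assumes `criterion`
// and `wasmparser` as dev-dependencies and a placeholder .wasm fixture.
use criterion::{criterion_group, criterion_main, Criterion};
use wasmparser::Validator;

fn bench_validate(c: &mut Criterion) {
    // Placeholder: load the same fixture used by the translation
    // benchmarks so validation cost can be compared in isolation.
    let wasm = std::fs::read("benches/wasm/tiny_keccak.wasm").expect("missing fixture");
    c.bench_function("validate/tiny_keccak", |b| {
        b.iter(|| {
            let mut validator = Validator::new();
            validator.validate_all(&wasm).unwrap();
        })
    });
}

criterion_group!(benches, bench_validate);
criterion_main!(benches);
```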
Oh, thanks a lot for the information! I like #1845 a lot: even if it won't improve performance, reducing compile times is already a huge gain. :)
I opened https://github.com/bytecodealliance/wasm-tools/pull/1870 for minor gains in the `code` section validation. However, as the Wasmi benchmarks suggest, most of the performance regressions are in the Wasm validation parts outside the...
@alexcrichton I reran `flamegraph` on current Wasmi `main` compared to [the PR to update `wasmparser`](https://github.com/wasmi-labs/wasmi/pull/1141) to find out which part of `wasmparser` has regressed the most since `v0.100.0`. It is the `wasmparser::Validator::type_section`...
> Would you be able to extract an example module or two that showcases this slowdown? For example a module that only has a type section should be sufficient to...
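A type-section-only module for this could be generated along the following lines. This is only an illustration assuming the `wat` and `wasmparser` crates; the type count and signature shapes are made up and not extracted from a real benchmark input:

```rust
// Illustrative generator for a module whose only content is a large
// type section, to isolate `Validator::type_section` cost.
fn main() {
    let mut wat = String::from("(module\n");
    for i in 0..10_000usize {
        // Vary the signatures a little so type canonicalization has work to do.
        let params = "i32 ".repeat(i % 8 + 1);
        wat.push_str(&format!(
            "  (type (func (param {}) (result i32)))\n",
            params.trim_end()
        ));
    }
    wat.push_str(")\n");
    let wasm = wat::parse_str(&wat).expect("valid wat");

    // Validate just this module so only the type section shows up in a profile.
    let mut validator = wasmparser::Validator::new();
    validator.validate_all(&wasm).expect("valid wasm");
}
```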