perf(semantic): avoid counting nodes
Instead of visiting the AST to count AST nodes in `SemanticBuilder`, just make high estimates of the counts of nodes, scopes, symbols and references, based on source text length. These estimates are the maximum number of items theoretically possible for a given source text length. Almost always they'll be vast over-estimates, but they should never be under-estimates.

This ensures the allocations backing `AstNodes`, `ScopeTree` and `SymbolTable` will not grow during their construction (growing is a major performance hit), while avoiding the complete AST traversal we were using previously to get accurate counts.
This "estimate high" approach is a viable strategy on 64-bit systems with virtual memory, where:
- The memory space is so large that it's almost impossible to exhaust.
- Allocating memory only consumes virtual memory, not physical memory.
So use this faster "estimate high" method as standard, and fall back to the slower visitor-based counter on 32-bit systems and WASM.
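A sketch of how the two paths could be selected at compile time, using a hypothetical `node_count` helper (not oxc's actual code). wasm32 reports `target_pointer_width = "32"`, so it takes the accurate-count path along with other 32-bit targets.

```rust
/// 64-bit targets: return a cheap over-estimate based on source length.
#[cfg(target_pointer_width = "64")]
fn node_count(source_len: usize, count_by_visiting: impl FnOnce() -> usize) -> usize {
    let _ = count_by_visiting;
    source_len // theoretical maximum: one node per byte (illustrative bound)
}

/// 32-bit targets (including wasm32): address space is scarce, so pay the
/// cost of an exact count from the AST visitor instead.
#[cfg(not(target_pointer_width = "64"))]
fn node_count(source_len: usize, count_by_visiting: impl FnOnce() -> usize) -> usize {
    let _ = source_len;
    count_by_visiting()
}

fn main() {
    let source = "const a = [1, 2, 3];";
    // The closure stands in for the visitor-based counter used previously.
    let n = node_count(source.len(), || 9);
    println!("reserving capacity for up to {n} nodes");
}
```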
Note: ~~I tried using `shrink_to_fit` on the `Vec`s once the semantic run is complete, to free up the excess memory. But this caused the `Vec`s to all reallocate, which negated the performance gains.~~ This turned out to be an artefact of the custom global allocator we use in our benchmarking setup; shrinking the allocations does not cause reallocation.