Parsing an expression containing a huge list of tokens triggers a stack overflow
Hello,
I've stumbled across a weird bug in the JavaScript parser: when a single expression uses a lot of tokens (>120 in debug mode, and >3400 in release mode), `Parser::parse()` triggers a stack overflow.
Here is a minimal way to reproduce the bug:
```rust
use boa::{
    exec::{Executor, Interpreter},
    js::value::ResultValue,
    realm::Realm,
    syntax::{lexer::Lexer, parser::Parser},
};

fn main() {
    // A simple but huge expression
    const src: &str = r#"
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 +
1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1;
"#;

    // build the interpreter
    let realm = Realm::create();
    let mut engine: Interpreter = Executor::new(realm);

    // lex the input
    let mut lexer = Lexer::new(&src);
    lexer.lex().expect("lexing failed");

    // /!\ parsing this token list triggers a stack overflow
    let _ = Parser::new(lexer.tokens).parse_all();
}
```
My guess is that the parser recursively calls itself to process each right-hand side of an addition, which slowly grows the stack until it eventually overflows.
The only solution I can think of is to re-implement the parser as a stack machine, which would basically turn the recursive calls into a while loop (but this might represent a ton of work :S).
Thank you for your time,
evomassiny
Thanks @evomassiny. This is a great issue; I think we should certainly look at it. The reproduction steps are useful.
I'm not sure how the refactor would work right now, but it's something to think about. I'm open to ideas. A stack machine sounds about right.
Hi,
I found a way to build a parser without function recursion, by using an intermediate representation of the Abstract Syntax Tree in Polish notation and building an AST from it in a second pass.
The benefit of Polish notation is its "flatness": we can represent the whole AST as a vector of expressions, which makes it easier to work with, especially using a stack machine.
The main parsing algorithm would be something like this (see the sketch after this list):

- Collect tokens from the lexer.
- Start with `idx = 0`.
- Try to identify an expression pattern starting from `tokens[idx..]`; if one matches:
  - build the associated expression; the expression type should hold the number of sub-expressions needed to execute it, but not the sub-expressions themselves,
  - push this expression onto a `stack`; the sub-expressions will be parsed in the next iterations,
  - invalidate the parsed tokens, so the next iteration won't try to parse them again,
  - increment `idx`.
- Iterate over the `stack` in reverse to build the actual AST, starting from the leaves up to the trunk.
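For illustration, here is a minimal sketch of that two-pass idea, restricted to chains of `+` over integer literals. All of the names (`Token`, `FlatExpr`, `Expr`, `flatten`, `build_ast`) are made up for this sketch and are not boa's actual types: pass one flattens the infix chain into Polish (prefix) notation iteratively, and pass two rebuilds the AST with an explicit stack instead of the call stack.

```rust
use std::collections::VecDeque;

// All of these types are made up for the sketch; they are not boa's.
#[derive(Debug)]
enum Token {
    Num(i64),
    Plus,
}

// Flat Polish-notation node: it says what kind of expression it is
// (and therefore how many sub-expressions it needs), but does not own
// the sub-expressions themselves.
#[derive(Debug)]
enum FlatExpr {
    Num(i64), // needs 0 sub-expressions
    Add,      // needs 2 sub-expressions
}

#[derive(Debug)]
enum Expr {
    Num(i64),
    Add(Box<Expr>, Box<Expr>),
}

// Pass 1: flatten an infix `+` chain into prefix (Polish) notation,
// iteratively: `1 + 2 + 3` becomes [Add, Add, Num(1), Num(2), Num(3)],
// the prefix form of the left-associative ((1 + 2) + 3).
fn flatten(tokens: &[Token]) -> VecDeque<FlatExpr> {
    let mut flat = VecDeque::new();
    for token in tokens {
        match token {
            Token::Num(n) => flat.push_back(FlatExpr::Num(*n)),
            Token::Plus => flat.push_front(FlatExpr::Add),
        }
    }
    flat
}

// Pass 2: walk the flat form in reverse (leaves first) and combine
// nodes on an explicit stack instead of the call stack.
fn build_ast(flat: &VecDeque<FlatExpr>) -> Expr {
    let mut stack: Vec<Expr> = Vec::new();
    for node in flat.iter().rev() {
        match node {
            FlatExpr::Num(n) => stack.push(Expr::Num(*n)),
            FlatExpr::Add => {
                let lhs = stack.pop().expect("missing left operand");
                let rhs = stack.pop().expect("missing right operand");
                stack.push(Expr::Add(Box::new(lhs), Box::new(rhs)));
            }
        }
    }
    stack.pop().expect("empty expression")
}

fn main() {
    // 1 + 2 + 3
    let tokens = [
        Token::Num(1),
        Token::Plus,
        Token::Num(2),
        Token::Plus,
        Token::Num(3),
    ];
    // Prints: Add(Add(Num(1), Num(2)), Num(3))
    println!("{:?}", build_ast(&flatten(&tokens)));
}
```

Neither pass recurses, so the parsing depth no longer grows with the number of tokens.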
The hard part is the implementation of the function that match expressions patterns (basically regex but for tokens).
To convince myself that it could be done, I implemented a parser for a subset of the javascript langage in this repo: https://github.com/evomassiny/toylang, If you want to see the part that build the AST in Polish Notation it's here, and the part that build an actual AST from it is there.
I hope this helps, but to be honest I don't have much knowledge of AST's parsers, there might be easier solutions.
Also look into https://en.wikipedia.org/wiki/Shunting-yard_algorithm
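For context, here is a minimal shunting-yard sketch (single-digit numbers only; the names and token handling are made up for illustration, not boa's API). Operators wait on an explicit stack ordered by precedence, and the output is postfix (RPN), which can then be folded into an AST or evaluated without recursion:

```rust
// Precedence of each infix operator; higher binds tighter.
fn precedence(op: char) -> u8 {
    match op {
        '+' | '-' => 1,
        '*' | '/' => 2,
        _ => 0,
    }
}

// Convert an infix expression (single-digit numbers only) into
// postfix / Reverse Polish Notation using an explicit operator stack.
fn to_rpn(input: &str) -> Vec<String> {
    let mut output: Vec<String> = Vec::new();
    let mut ops: Vec<char> = Vec::new();
    for c in input.chars().filter(|c| !c.is_whitespace()) {
        if c.is_ascii_digit() {
            // Operands go straight to the output.
            output.push(c.to_string());
        } else {
            // Flush every operator that binds at least as tightly,
            // which makes the operators left-associative.
            while let Some(&top) = ops.last() {
                if precedence(top) >= precedence(c) {
                    output.push(ops.pop().unwrap().to_string());
                } else {
                    break;
                }
            }
            ops.push(c);
        }
    }
    // Flush whatever operators remain.
    while let Some(op) = ops.pop() {
        output.push(op.to_string());
    }
    output
}

fn main() {
    // No matter how long the input chain is, only the operator stack
    // grows, never the call stack.
    assert_eq!(to_rpn("1 + 2 * 3"), vec!["1", "2", "3", "*", "+"]);
}
```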
Thanks @simonbuchan, that is what my current WIP implementation is based upon.
> Also look into https://en.wikipedia.org/wiki/Shunting-yard_algorithm

@simonbrahan that's a great algorithm! I believe our change is basically that with some tweaks.
Wrong @simon :P
Adding a benchmark to keep track of expression parsing: https://github.com/jasonwilliams/boa/pull/226
What algorithm does the parser currently use? I've not had the stack overflow problem with Pratt parsing. I even tried plugging the text in from #162 into my Pratt expression parser, and it parsed very quickly (and this is written in TypeScript: imagine the gains if it was in Rust).
The nice thing about Pratt parsing is that it integrates very naturally with recursive descent parsing.
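To illustrate that point, here is a rough Pratt-style (precedence-climbing) sketch, again with made-up names and single-character tokens rather than anything from boa: the core is an ordinary recursive-descent function parameterized by a minimum binding power.

```rust
use std::iter::Peekable;
use std::str::Chars;

// Binding power of an infix operator; higher binds tighter.
fn binding_power(op: char) -> Option<u8> {
    match op {
        '+' | '-' => Some(1),
        '*' | '/' => Some(2),
        _ => None,
    }
}

#[derive(Debug)]
enum Expr {
    Num(i64),
    Binary(char, Box<Expr>, Box<Expr>),
}

// Pratt / precedence-climbing core: parse one operand, then keep
// folding in infix operators whose binding power is at least `min_bp`.
fn parse_expr(tokens: &mut Peekable<Chars<'_>>, min_bp: u8) -> Expr {
    // Single-digit operands keep the sketch short.
    let c = tokens.next().expect("expected an operand");
    let mut lhs = Expr::Num(i64::from(c.to_digit(10).expect("expected a digit")));
    while let Some(&op) = tokens.peek() {
        let bp = match binding_power(op) {
            Some(bp) if bp >= min_bp => bp,
            _ => break,
        };
        tokens.next(); // consume the operator
        // `bp + 1` makes the operator left-associative: the recursive
        // call returns as soon as it sees another operator of the
        // same level.
        let rhs = parse_expr(tokens, bp + 1);
        lhs = Expr::Binary(op, Box::new(lhs), Box::new(rhs));
    }
    lhs
}

fn main() {
    let mut tokens = "1+2*3".chars().peekable();
    // Prints: Binary('+', Num(1), Binary('*', Num(2), Num(3)))
    println!("{:?}", parse_expr(&mut tokens, 0));
}
```

Note that only the nesting of precedence levels lives on the call stack, which is why a Pratt parser tends to overflow only on deeply nested input, not on long flat operator chains.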
This bug was fixed by #223
@jrop for this issue it was fixed using the Shunting Yard algorithm. Also if you have better ideas, please leave a comment on this thread #225
@jrop Pratt parsers are very similar to the shunting-yard algorithm: both are based on attaching precedence and associativity to infix operators. The main difference seems to be that Pratt parsers store nested precedence levels on the call stack, while shunting yard handles that with an explicit stack. If you have a sensible number of precedence levels, Pratt parsing won't hit the stack limit, but it's theoretically a bit slower due to the higher number of calls; in practice you won't notice a difference. Really, the difference is that there's much better documentation on how to implement shunting yard; Pratt parsers tend to use weird terms like "null denotation".
This bug still occurs - here are a few examples.
```js
1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1+1
[[[[[[[[[[[[[[[[[[[]]]]]]]]]]]]]]]]]]]
```
Both of these panic in debug mode and in the website's WASM demo. You may need to increase the size of the expressions to cause a panic in release mode.
I will re-open this issue, since I think we went back to recursive parsing when we implemented our new parser. Maybe @jasonwilliams can confirm or give more insight.
The example in the original description doesn't overflow anymore, as there have been some optimizations, but this example still causes the issue: https://gist.github.com/jasonwilliams/9f461a7fac0e7721702d82b05fb1012c