[Suggestion] Provide Escaping Options in Lexer
**Describe the feature**
Is it possible to provide an option in the lexer, such as `enableEscaping: boolean`, that controls whether the lexer turns special characters (e.g. `&`, `<`, `>`) into HTML entities (e.g. `&amp;`, `&lt;`, `&gt;`)?
```js
const tokens = marked.lexer(text, {
  enableEscaping: false,
});
```
**Why is this feature necessary?**
- For example, when I get a token like `{type: 'codespan', raw: '<h1>', text: '&lt;h1&gt;'}` and I wish to display the text normally, I have to do extra work, such as a regular-expression replacement, to turn `&lt;h1&gt;` back into `<h1>`.
- But this essentially just undoes what the default tokenizer in marked did in the first place, which is meaningless extra computation.
- So my suggestion is to provide an option that prevents the escaping from happening inside the default lexer in the first place.
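The "undo" step described above might look like this minimal sketch (the helper name and entity list are my own, not part of marked's API):

```js
// Map the HTML entities that marked's default tokenizer emits back to
// plain characters. Extend the entity table as needed.
const unescapeHtml = (text) =>
  text.replace(/&(amp|lt|gt|quot|#39);/g, (_, entity) => (
    { amp: '&', lt: '<', gt: '>', quot: '"', '#39': "'" }[entity]
  ));

console.log(unescapeHtml('&lt;h1&gt;')); // '<h1>'
```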
**Describe alternatives you've considered**
I saw an issue answered in 2022 saying that I have to override the tokenizer to achieve this.
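For reference, that override could be sketched roughly as follows. The regex here is a simplified stand-in for marked's actual codespan rule, and the exact token shape is an assumption; the wiring would go through marked's documented `marked.use({ tokenizer })` extension point:

```js
// A codespan tokenizer override that returns the matched content
// verbatim, skipping HTML escaping. Simplified: real codespan rules
// handle multi-backtick delimiters and whitespace trimming edge cases.
const rawCodespan = (src) => {
  const match = /^(`+)([^`]+?)\1(?!`)/.exec(src);
  if (!match) return undefined;
  return { type: 'codespan', raw: match[0], text: match[2].trim() };
};

// Hypothetical wiring: marked.use({ tokenizer: { codespan: rawCodespan } });
console.log(rawCodespan('`<h1>` heading'));
```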
I noticed this a little while ago. I am working on moving the escaping to the renderers instead of the tokenizers.
The tokenizers should just pull the information out of the markdown and create tokens without changing it; the renderers should then handle any changes, such as HTML escaping.
Your proposal seems better and more logical.
Even though I’m only using the tokenizer and not the renderer in my case, do you still plan to make HTML escaping optional inside the renderer?
No. The default renderers will always escape, since that follows the CommonMark spec, but they can be overridden.
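To illustrate the split described here, a renderer-side escape could look like the following sketch. The function names and the token shape passed in are assumptions for illustration, not marked's exact internals:

```js
// Escape at render time: the tokenizer keeps the raw text, and the
// renderer converts special characters to entities when emitting HTML.
const escapeHtml = (text) =>
  text.replace(/[&<>"']/g, (ch) => (
    { '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;' }[ch]
  ));

const renderCodespan = (token) => `<code>${escapeHtml(token.text)}</code>`;

console.log(renderCodespan({ type: 'codespan', raw: '`<h1>`', text: '<h1>' }));
// → <code>&lt;h1&gt;</code>
```

A custom renderer override would simply skip the `escapeHtml` call for tokens it wants to pass through untouched.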
Got it.