
Provide macros corresponding to the Unicode general categories

Open sjakobi opened this issue 6 years ago • 11 comments

Right now it seems very difficult to write a rule e.g. for words starting with an uppercase letter.

sjakobi avatar Jul 27 '18 05:07 sjakobi

What have you tried? I thought this was as simple as:

   [A-Z][a-zA-Z]*

which means one upper-case letter followed by zero or more mixed-case letters.
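
As a point of reference, a minimal Alex specification built around this rule (using the "basic" wrapper) might look roughly like this:

    {
    -- A self-contained sketch around the suggested rule.
    module Main (main) where
    }

    %wrapper "basic"

    tokens :-

      -- skip whitespace and words that start with a lower-case ASCII letter
      $white+              ;
      [a-z][a-zA-Z]*       ;

      -- keep words that start with an upper-case ASCII letter
      [A-Z][a-zA-Z]*       { id }

    {
    main :: IO ()
    main = mapM_ putStrLn (alexScanTokens "Foo bar Baz")
    }

(As discussed below, [A-Z] only covers ASCII upper-case letters.)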

erikd avatar Jul 27 '18 23:07 erikd

The Unicode uppercase letter category is quite a bit larger than [A-Z]. Then there are questions like, should e.g. $upper include titlecase letters?

It might make sense to have macros both for the predicates Haskell programmers are used to from Data.Char and for the Unicode general categories…
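
For illustration, a small Data.Char example (the sample characters are arbitrary) showing both points: an upper-case letter outside [A-Z], and the uppercase/titlecase distinction:

    import Data.Char (GeneralCategory (..), generalCategory, isUpper)

    main :: IO ()
    main = do
      print (isUpper 'À')          -- True: an upper-case letter outside [A-Z]
      print (generalCategory 'À')  -- UppercaseLetter (Lu)
      print (isUpper 'ǅ')          -- True: Data.Char.isUpper also accepts titlecase
      print (generalCategory 'ǅ')  -- TitlecaseLetter (Lt)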

sjakobi avatar Jul 28 '18 00:07 sjakobi

Ah ok. I think my response shows that your initial question was not specific enough.

If you specify exactly what it is you want to do and why the current functionality is insufficient, you will get much more useful responses than my reply above.

erikd avatar Jul 28 '18 00:07 erikd

If you specify exactly what it is you want to do and why the current functionality is insufficient

Right. :) I should have done that first. :)

I want to detect Haskell identifiers. For that I need the following character sets:

  • lowercase, uppercase and titlecase letters
  • decimal digits
  • symbols and punctuation
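
As a rough ASCII-only sketch (loosely modelled on the macros in GHC's own Lexer.x), the corresponding Alex set and regex macros could look like the following; full-Unicode versions of $small, $large and friends are exactly what this issue asks for:

    -- ASCII-only approximations of the character classes listed above.
    $small  = [a-z \_]
    $large  = [A-Z]
    $digit  = [0-9]
    $symbol = [\! \# \$ \% \& \* \+ \. \/ \< \= \> \? \@ \\ \^ \| \- \~ \:]

    @varid  = $small [$small $large $digit \']*
    @conid  = $large [$small $large $digit \']*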

sjakobi avatar Jul 28 '18 14:07 sjakobi

Forgive me for being pedantic here, but that is not what you are asking for.

You rejected my suggestion above, saying that [A-Z] does not cover Unicode. This suggests you want more than just "lowercase, uppercase and titlecase letters", because for most people with English as a first language, that means [a-z] for lowercase and [A-Z] for uppercase.

Furthermore "symbols and punctuation" can mean different things in different programming languages and even in different human languages so there is not one single solution.

Maybe looking at the lexer for GHC itself will provide you some inspiration.

erikd avatar Jul 29 '18 02:07 erikd

Maybe looking at the lexer for GHC itself will provide you some inspiration.

Thanks. Yeah, in the end I want a lexer that detects the same identifiers that GHC itself will lex.

But I don't want to replicate GHC's strange Unicode workaround.

If Alex could provide macros corresponding to the Unicode general categories, building the lexer would be quite easy.
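
For background, that workaround (roughly) replaces each non-ASCII character with a single stand-in byte chosen from its Unicode general category before it reaches the Alex automaton, so the automaton only ever sees a small alphabet. A minimal sketch of the idea (not GHC's actual code; the byte values are arbitrary placeholders):

    import Data.Char (GeneralCategory (..), generalCategory, ord)
    import Data.Word (Word8)

    -- ASCII passes through unchanged; anything else becomes a stand-in
    -- byte determined by its general category.
    classifyByte :: Char -> Word8
    classifyByte c
      | ord c < 0x80 = fromIntegral (ord c)
      | otherwise    = case generalCategory c of
          UppercaseLetter -> 0xF0
          TitlecaseLetter -> 0xF0
          LowercaseLetter -> 0xF1
          DecimalNumber   -> 0xF2
          _               -> 0xF3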

sjakobi avatar Jul 29 '18 10:07 sjakobi

I think my concern with lexing UTF-8 directly in Alex for Haskell source code was that the generated state machine might be huge. I didn't actually do that experiment, though; I'd be interested in the results.

simonmar avatar Aug 19 '18 09:08 simonmar

I'm not sure what to look at in the generated Haskell files to see if that's a problem. Language.Javascript lexes UTF-8 directly. I did something similar while experimenting with R7RS Scheme parsing. Perhaps someone who knows what they're looking for could check if this approach causes problems?

JKTKops avatar Jan 03 '21 00:01 JKTKops

Per https://github.com/simonmar/alex/pull/165, I would like to unfuse the UTF-8 and user-written automata to declutter the implementation, which we suspect is a bit confused because it might predate Char properly supporting Unicode.

(Even better would be to then implement proper automaton composition, to allow the user to choose whether or not to fuse the automata (when the underlying string is byte- rather than character-oriented), and to start exploring the proper categorical semantics of the language specs themselves! But I am getting starry-eyed and off-topic.)

Back to the point, once things can work Char-by-Char nice and simply, I hope character classes for arbitrary Unicode code-points will be a breeze.

Ericson2314 avatar Jan 04 '21 06:01 Ericson2314

This will be very useful indeed! I just did something similar to https://github.com/simonmar/alex/issues/126#issuecomment-753546545 and wish such support existed.

Javran avatar Mar 18 '21 08:03 Javran

Just want to add a few notes on this issue:

(A bit of background: I'm following the Java SE 16 spec to write a parser for fun, so my notes below are based on my experience with that spec.)

One workaround I tried is to let Alex accept a wider language: Java forbids Unicode outside identifiers and literals, so I can take advantage of that fact and be specific only within the \x00-\x7F range:

$JavaIdentifierStartLite = [\x24\x41-\x5A\x5F\x61-\x7A\x80-\x10ffff]
$JavaIdentifierPartLite = [$JavaIdentifierStartLite\x00-\x08\x0E-\x1B\x30-\x39\x7F\x80-\x10ffff]

and then I can deal with them in the AlexAction (sketched below). However, this doesn't work for several reasons:

  • Java SE 16 follows Unicode 13.0, but Data.Char.generalCategory depends on the UnicodeData shipped with GHC itself, so I'm observing some differences due to this misalignment
  • Another thing I attempted was to store the accepted Chars in a set (this set can be obtained by iterating through all Unicode code point values on a JVM language and calling Character.isJavaIdentifierStart and Character.isJavaIdentifierPart); however, both Data.IntSet and Data.HashSet are very slow even on my small test suites. In comparison, making Alex accurately recognize identifiers has much better performance.
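
For concreteness, a rough sketch of the action-level check mentioned above, using hypothetical isJavaIdentStart/isJavaIdentPart approximations built on Data.Char (they are not exact ports of java.lang.Character, and they inherit the UnicodeData mismatch from the first point):

    import Data.Char (GeneralCategory (..), generalCategory)

    -- Hypothetical approximations of Character.isJavaIdentifierStart/Part;
    -- identifier-ignorable characters are omitted for brevity.
    isJavaIdentStart :: Char -> Bool
    isJavaIdentStart c = generalCategory c `elem`
      [ UppercaseLetter, LowercaseLetter, TitlecaseLetter, ModifierLetter
      , OtherLetter, LetterNumber, CurrencySymbol, ConnectorPunctuation ]

    isJavaIdentPart :: Char -> Bool
    isJavaIdentPart c = isJavaIdentStart c || generalCategory c `elem`
      [ DecimalNumber, SpacingCombiningMark, NonSpacingMark ]

    -- Validate a lexeme matched by the wider "Lite" rule inside the action.
    validIdent :: String -> Bool
    validIdent []       = False
    validIdent (c : cs) = isJavaIdentStart c && all isJavaIdentPart cs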

My key takeaways:

  • Data.IntSet and Data.Set are way slower than having Alex handle it (please let me know if there are other alternatives; my Alex approach kind of "cheated" by grouping consecutive ranges like \x30000-\x3134A rather than storing every integer individually)
  • If we were to support general category macros, should we provide a way to allow the user to supply a UnicodeData version? (And how exactly should this be implemented?)

Javran avatar Mar 20 '21 04:03 Javran