Corner cases arising from Big5 encoder not excluding HKSCS codes with lead bytes 0xFA–FE

Open harjitmoe opened this issue 4 years ago • 6 comments

https://encoding.spec.whatwg.org/commit-snapshots/4d54adce6a871cb03af3a919cbf644a43c22301a/#visualization

Let index be index Big5 excluding all entries whose pointer is less than (0xA1 - 0x81) × 157.

Avoid returning Hong Kong Supplementary Character Set extensions literally.
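
For concreteness, the threshold in the quoted step works out to pointer 5024:

# Pointers below (0xA1 - 0x81) × 157 correspond to rows with lead
# bytes 0x81-A0, i.e. the HKSCS extension range below Big5 proper.
print((0xA1 - 0x81) * 157)  # 32 × 157 = 5024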

As has become apparent in my attempts to chart different Big5 and CNS 11643 variants: if the intention is for the encoder to be pure Big5-ETEN, excluding all further extensions that Big5-HKSCS adds on top of it, then lead bytes 0xFA–FE need to be excluded as well, not just 0x81–A0.

The only-partial exclusion of HKSCS in the encoder defined by the current standard creates some truly bizarre corner cases through its interaction with index-big5's inclusion of the duplicate mappings inherited from GCCS (which many Big5 codecs, even HKSCS-equipped ones such as Python's big5-hkscs, do not accept). Some of these mappings duplicate other GCCS/HKSCS codes rather than standard Big5 codes. In four cases, the GCCS duplicate has a lead byte in 0xFA–FE while its standard HKSCS code has a lead byte in 0x81–A0. Hence, the WHATWG-described behaviour finishes up decoding these characters from both byte sequences, but encoding them to their GCCS duplicates, as follows:

0x9DEF → 嘅 U+5605 ↔ 0xFB48
0x9DFB → 廐 U+5ED0 ↔ 0xFBF9
0xA0DC → 悤 U+60A4 ↔ 0xFC6C
0x9975 → 猪 U+732A ↔ 0xFE52
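
To see why only the 0xFA–FE form survives encoding, here is a minimal sketch of the decoder's pointer arithmetic from the spec, applied to the four pairs above (the names and the pair list are mine; the 5024 threshold is the exclusion from the quoted step):

# Big5 pointer arithmetic per the WHATWG decoder:
# pointer = (lead - 0x81) * 157 + (trail - offset), where offset is
# 0x40 for trail bytes below 0x7F and 0x62 otherwise.
def big5_pointer(lead, trail):
    offset = 0x40 if trail < 0x7F else 0x62
    return (lead - 0x81) * 157 + (trail - offset)

THRESHOLD = (0xA1 - 0x81) * 157  # 5024

# (HKSCS bytes, GCCS duplicate bytes) for the four characters above.
pairs = [((0x9D, 0xEF), (0xFB, 0x48)),  # U+5605
         ((0x9D, 0xFB), (0xFB, 0xF9)),  # U+5ED0
         ((0xA0, 0xDC), (0xFC, 0x6C)),  # U+60A4
         ((0x99, 0x75), (0xFE, 0x52))]  # U+732A

for hkscs, gccs in pairs:
    p1, p2 = big5_pointer(*hkscs), big5_pointer(*gccs)
    # The 0x81-A0 pointer is below the threshold (dropped from the
    # encoder index); the 0xFA-FE pointer is not, so it gets emitted.
    print(p1, p1 < THRESHOLD, p2, p2 < THRESHOLD)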

Accepting these GCCS duplicates is probably fine, but generating them (when not even all HKSCS-equipped implementations will accept them) is probably inappropriate, even assuming (for the sake of argument) that the encoder's current halfway house between Big5-ETEN and Big5-HKSCS was deliberately chosen.

harjitmoe avatar Feb 15 '21 20:02 harjitmoe

Thank you for reporting this!

@foolip @hsivonen @ricea thoughts? Assuming this is correct, while there is some risk in changing the encoder, it's usually fairly minimal, right?

annevk avatar Feb 16 '21 07:02 annevk

Seems minimal-risk, yes. Indeed, it has been noted before that including the range above the original Big5 range in the encoder is questionable when the range below it is excluded.

hsivonen avatar Feb 16 '21 08:02 hsivonen

I'm confused, since according to the note:

There are other duplicate code points, but for those the first pointer is to be used.

we should not be returning the 0xFxxx duplicates anyway. What am I misunderstanding?

ricea avatar Feb 16 '21 08:02 ricea

Step 1 of https://encoding.spec.whatwg.org/#index-big5-pointer excludes some of them.
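
In sketch form (with big5 being the index array from indexes.json, and leaving out the spec's separate last-pointer special case for U+2550 and friends), the lookup is roughly:

# Rough sketch of "index Big5 pointer": step 1 drops every pointer
# below (0xA1 - 0x81) × 157 before the note's "first pointer" rule
# applies, so a code point whose first pointer is in the dropped range
# falls through to its later 0xFA-FE duplicate.
def index_big5_pointer(code_point, big5):
    for pointer, cp in enumerate(big5):
        if pointer >= (0xA1 - 0x81) * 157 and cp == code_point:
            return pointer  # first pointer remaining after step 1
    return None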

annevk avatar Feb 16 '21 09:02 annevk

Okay, I get it now. This change seems reasonable, I think, but it won't be a high priority for Chrome.

ricea avatar Feb 16 '21 09:02 ricea

When I run

import json

# indexes.json holds the WHATWG index data; the big5 array maps
# pointers (array positions) to code points, with null for unmapped.
with open("indexes.json", "r") as f:
    big5 = json.load(f)["big5"]

# Collect every pointer at which each code point appears.
code_points = {}
for pointer, code_point in enumerate(big5):
    if code_point is not None:
        code_points.setdefault(code_point, []).append(pointer)

# 5024 is (0xA1 - 0x81) × 157, the encoder's exclusion threshold.
for code_point, pointers in code_points.items():
    if len(pointers) > 1:  # it's either 1 or 2
        if all(p < 5024 for p in pointers):
            excluded = "yes"
        elif any(p < 5024 for p in pointers):
            excluded = "partial"
        else:
            excluded = "no"
        print("U+%X" % code_point, pointers, excluded)

it seems there are many other duplicate pointers we probably want to keep excluding. If so, the fix here would likely be to special-case the code points listed in the OP.
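
A hypothetical sketch of such a special case, layered on the lookup above (the code-point set is the one from the OP; any real fix would need proper spec wording):

LOW_LIMIT = (0xA1 - 0x81) * 157   # 5024: existing step-1 exclusion
HKSCS_HIGH = (0xFA - 0x81) * 157  # 18997: first pointer with lead 0xFA

# The four code points whose only surviving pointers are GCCS
# duplicates with lead bytes 0xFA-FE.
GCCS_DUPLICATES = {0x5605, 0x5ED0, 0x60A4, 0x732A}

def index_big5_pointer_fixed(code_point, big5):
    for pointer, cp in enumerate(big5):
        if cp != code_point or pointer < LOW_LIMIT:
            continue
        if code_point in GCCS_DUPLICATES and pointer >= HKSCS_HIGH:
            continue  # proposed: never emit the GCCS duplicate
        return pointer
    return None  # unencodable; the encoder signals an error instead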

annevk avatar Feb 16 '21 10:02 annevk