Fix unicode character counting bug
This PR fixes a bug when counting unicode characters in num_utf8_chars().
The bug is in the code that does the bit shift to check the top two bits. Because the type is a signed char, when the top bit is set (indicating a non-ASCII byte) the value gets sign-extended to an int (0xffffffXX), which then fails the comparison. The correct way to do this check would be: `((src[i] & 0xff) >> 6) != 2`.
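For illustration, here's a reduced standalone example of the failure mode and the masked fix (this isn't the actual py_yyjson source; the variable names are made up):

```c
#include <stdio.h>

int main(void) {
    /* U+00E9 ("é") encodes as the two bytes 0xC3 0xA9 in UTF-8. */
    char src[] = "\xc3\xa9";
    char c = src[1];  /* continuation byte 0xA9 */

    /* Buggy check: with a signed char, c is sign-extended to 0xffffffa9,
     * the (typically arithmetic) shift yields -2, and the continuation
     * byte is wrongly counted as the start of a new character. */
    printf("buggy: counted as new char? %d\n", (c >> 6) != 2);          /* prints 1 */

    /* Fixed check: mask to a byte first so the shift sees 0x000000a9. */
    printf("fixed: counted as new char? %d\n", ((c & 0xff) >> 6) != 2); /* prints 0 */
    return 0;
}
```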
During investigation of the bug, I realized the reason for counting unicode characters was to determine whether the string was all ASCII or not. I also realized a faster way of doing this check would be to look at the top bit of each character in the string and return as soon as a set bit is found. That way, the loop exits as soon as a non-ASCII character is encountered.
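As a sketch of that approach (the exact signature in the PR may differ):

```c
#include <stdbool.h>
#include <stddef.h>

/* Returns true if every byte in src is plain ASCII. Any byte with the
 * top bit set starts or continues a multi-byte UTF-8 sequence, so we
 * can bail out as soon as we see one. */
static bool is_ascii(const char *src, size_t len) {
    for (size_t i = 0; i < len; i++) {
        if (src[i] & 0x80) {
            return false;
        }
    }
    return true;
}
```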
The following changes were made:
- Changed `num_utf8_chars()` to `is_ascii()`, updated the logic to iterate through the string until a non-ASCII character is found, and updated `unicode_from_str()` to use `is_ascii()`.
- Added tests to validate the bug no longer exists (we're processing ASCII/unicode strings correctly) and added some additional unicode tests from stdlib json.
This bit of code is incomplete; it's a reduced version of a more complex optimization.
Almost all of the time spent reading a typical large JSON document in Python (this is true of all Python JSON parsers, not just py_yyjson) goes to creating strings, such as keys. We can optimize these cases drastically by building the PyUnicode objects ourselves, but to do that we need to know how many characters there are, and the highest code point (127, 255, 65535, or 1114111).
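As a rough sketch of what that enables (illustrative only, not what py_yyjson currently does): once the length and the maximum code point are known, the CPython C-API lets you allocate the string in its final internal representation up front and write code points into it directly, instead of going through a generic UTF-8 decoder.

```c
#include <Python.h>

/* Build a str from already-decoded code points. PyUnicode_New() picks
 * the narrowest internal storage (latin-1, UCS-2, or UCS-4) based on
 * maxchar, which is why knowing the highest code point up front matters. */
static PyObject *unicode_from_codepoints(const Py_UCS4 *pts, Py_ssize_t n,
                                         Py_UCS4 maxchar)
{
    PyObject *str = PyUnicode_New(n, maxchar);
    if (str == NULL) {
        return NULL;
    }
    int kind = PyUnicode_KIND(str);
    void *data = PyUnicode_DATA(str);
    for (Py_ssize_t i = 0; i < n; i++) {
        PyUnicode_WRITE(kind, data, i, pts[i]);
    }
    return str;
}
```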
Thanks for your feedback. I don't understand how that function does what you say, but I restored it anyway and just updated it to mask the value so the comparison would work. Please let me know if you have any other feedback/concerns. Thanks!
Sorry if I wasn't clear - the old code is incomplete. I stripped out most of the work to get a release out with other fixes and never came back to complete it. I'm just explaining why it counts instead of stopping early :)