Current state of the art
Hello,
Sorry if this is not the right place; I'm just not sure where else to ask. Is the project still active? It seems broken in a few ways in my local environment.
yes; I did recently break some things, I think... A lot of work went into coverage and testing... I think it used to be more forgiving about unquoted strings... can you give me an example of something that's broken for you?
Oh, JSOX? I was thinking this was a different project; I don't know of any failure conditions - hence not a lot of updates. Again, though, if you have a simple test...
I have a big JSON file here with a large array inside. It just returns an empty string for the array...
the version from NPM or git?
I have a question regarding the benchmark. Does the JSOX parse result of 4.0648 mean it's 4x slower than the default parse?
ah I installed it from NPM
I haven't published a new version :) Can you please verify against git, and I'll tag and publish?
yes; JSOX ends up doing a lot more checks for some things; it has improved a bit... It does depend a lot on what sort of thing is being parsed, though, so I started to kind of walk away from those benchmarks. This shouldn't be that much slower; but I think because of the graceful string failure the paths are sometimes longer than they would otherwise be - GatherString() is fast at going from a quote to a quote; it's a little more work to check each character...
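That trade-off can be sketched roughly as follows (hypothetical helpers, not JSOX's actual GatherString()): a strict scanner can jump straight from quote to quote, while a forgiving one inspects every character so it can recover something from an unterminated string.

```javascript
// Hypothetical sketch of the two scanning strategies (not JSOX's real code).

// Fast path: jump directly from the opening quote to the closing quote.
function gatherStrict(buf, start) {
  const end = buf.indexOf('"', start);
  return end < 0 ? null : buf.slice(start, end);
}

// Graceful path: walk character by character so a missing closing
// quote can still yield a usable (truncated) string.
function gatherGraceful(buf, start) {
  let i = start;
  while (i < buf.length && buf[i] !== '"' && buf[i] !== '\n') i++;
  return buf.slice(start, i);
}

console.log(gatherStrict('hello" tail', 0));    // "hello"
console.log(gatherGraceful('hello\nworld', 0)); // "hello" (recovered)
```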
I see. I'm having a problem parsing BigInt. I'm dealing with a big chunk of them, so storing them all as strings would potentially increase the traffic size. But third-party parsers come with a performance trade-off. JSON with BigInt is a real pain.
JSOX seems to perform best so far. However, I tested JSOX and noticed that on very big numbers it still loses precision: jsox.parse('{"start": 1539786952342493550}') will return 1539786952342493400.
Again, this is the npm version, so I don't know if anything has been fixed. I'm using my forked version of json-bigint (I've made a PR already). It's a bit slower (5x).
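For context, that precision loss isn't specific to any one parser: any parser that returns a plain JavaScript Number is limited to a 53-bit mantissa. A minimal demonstration with the built-in JSON.parse:

```javascript
// 1539786952342493550 needs more than 53 bits, so the nearest
// representable double wins regardless of which parser produced it.
const parsed = JSON.parse('{"start": 1539786952342493550}');
console.log(parsed.start);                       // 1539786952342493400
console.log(Number.isSafeInteger(parsed.start)); // false

// Carrying the digits as a string and converting explicitly keeps them exact.
const exact = BigInt("1539786952342493550");
console.log(exact.toString());                   // "1539786952342493550"
```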
bigint requires an 'n' suffix...
I could maybe add a test for a value longer than N digits and force bignum conversion... (as long as it isn't a float, i.e. has no decimal or exponent)
case VALUE_NUMBER:
if( ( ( val.string.length > 10 ) || ( val.string.length == 9 && val.string[0] > '2' ) ) && !exponent_digit && !exponent_sign && !decimal )
isBigInt = true;
like line 360, when converting the 'value' to a value... adding that if and setting isBigInt would force precision for larger values?
4_294_967_295 is 10 digits, so the '9' should be 10 also
Ah I see; indeed, json-bigint eventually tests the length
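A minimal sketch of the length test being discussed (hypothetical code, not JSOX's or json-bigint's actual implementation): compare the digit string against Number.MAX_SAFE_INTEGER, and skip anything with a decimal or exponent, as noted above.

```javascript
// Hypothetical number-token classifier: integers that can't be held
// exactly in a double become BigInt; everything else stays a Number.
function parseNumberToken(s) {
  if (/[.eE]/.test(s)) return Number(s);           // float: leave as double
  const limit = String(Number.MAX_SAFE_INTEGER);   // "9007199254740991" (16 digits)
  const digits = s.replace(/^[-+]/, "");
  if (digits.length > limit.length ||
      (digits.length === limit.length && digits > limit)) {
    return BigInt(s);                              // too long to be exact
  }
  return Number(s);
}

console.log(typeof parseNumberToken("1539786952342493550")); // "bigint"
console.log(typeof parseNumberToken("42"));                  // "number"
console.log(parseNumberToken("1.5e3"));                      // 1500
```

Note the string comparison `digits > limit` is lexicographic, which is only valid because the two strings are the same length at that point.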
bigint behavior changed slightly; published a new version with both the old and new detections. Implemented user-defined type support tests.
sack.vfs JSOX support evolved somewhat from this, and user types became incompatible.
Found failures when reviving some references, and later when reviving user types with references.
Don't foresee changes in the future.