lint: consider the JSON parsing/deserialization design
Main options:
1. `std/json`
   - 1a. The approach so far: parse into a `JsonNode` and work only with that.
   - 1b. Parse into a `JsonNode`, then unmarshall into some object using `to` (see the sketch below).
   - 1c. Plus `std/jsonutils`.
2. `Araq/packedjson` - keeps everything as a string. Lower memory usage than `std/json`, and sometimes faster.
3. `planetis-m/eminim` - deserializes using `std/streams` directly to an `object`. Doesn't fully support object variants, but maybe that isn't a problem for us.
4. `status-im/nim-json-serialization` - deserializes using `nim-faststreams` directly to an `object`. Probably the most mature third-party option. Currently has a large dependency tree, including `chronos` and `bearssl`.
5. `treeform/jsony` - deserializes from `string` directly to an `object`.
(Note that disruptek/jason is serialization-only).
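To make options 1a and 1b concrete, here is a minimal sketch using only the standard library; the `Track` type and its fields are invented for illustration, not configlet's actual model:

```nim
import std/json

type
  Track = object   # hypothetical type, purely for illustration
    slug: string
    active: bool

let raw = """{"slug": "nim", "active": true}"""

# 1a: parse into a JsonNode and work with it directly.
let node = parseJson(raw)
doAssert node["slug"].getStr() == "nim"

# 1b: unmarshal the JsonNode into an object using `to`.
let track = node.to(Track)
doAssert track.active
```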
There are also some more obscure ones that I haven't tried, and don't know anything about:
Some of the above are possibly too lenient or require special handling in some edge cases.
Summary:
| Library | Permits a trailing comma? | Permits comments? | Duplicate key handling |
|---|---|---|---|
| Ruby stdlib json | :x: | :white_check_mark: | Uses last value |
| `std/json` | :white_check_mark: | :white_check_mark: | Uses last value |
| `std/json` patched | :x: | :x: | Uses last value |
| `packedjson` | :white_check_mark: | :white_check_mark: | Uses first value |
| `eminim` | :white_check_mark: | :white_check_mark: | Uses last value |
| `json_serialization` | :x: | :white_check_mark: | Uses last value |
| `jsony` | :white_check_mark: | :x: | Uses last value |
For example:
- There is no correct behaviour for a duplicate key - one library may produce an error, another may silently use the value of the first key, and another may silently use the value of the last key.
- `std/json` permits a trailing comma, and comments with `//` and `/* */` (see the snippet after this list). This is the main reason that it took a while to tick the boxes for "the file must be valid JSON" in https://github.com/exercism/configlet/issues/249. But we now have our own patched `std/json` with stricter parsing.
- `configlet lint` must exit with a non-zero exit code for a trailing comma, because the Ruby library that parses the file later produces an error for a trailing comma.
- Some libraries may use a default value when a key is missing, which we might want to distinguish from e.g. a value that is the empty string.
- Edge cases around a literal `null`.
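To illustrate the leniency described above (as reported in the table; exact behavior may vary between Nim versions), a small sketch of what unpatched `std/json` accepts:

```nim
import std/json

# A trailing comma and a // comment: not valid JSON, but accepted by the
# unpatched std/json parser (per the table above).
let lenient = """
{
  "slug": "two-fer", // a comment
  "name": "Two Fer",
}
"""
doAssert parseJson(lenient)["slug"].getStr() == "two-fer"

# Duplicate keys are also accepted, and the last value wins (per the table).
doAssert parseJson("""{"a": 1, "a": 2}""")["a"].getInt() == 2
```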
See also:
- https://einarwh.wordpress.com/2020/05/08/on-the-complexity-of-json-serialization/
- https://labs.bishopfox.com/tech-blog/an-exploration-of-json-interoperability-vulnerabilities
I'd suggest that jsony or nim-json-serialization might be best in the long-term. But maybe it's better to stick with the current approach until we've implemented all the linting rules, and refactor it later.
One advantage of the current approach is that it's more low-level, which might better ensure that we're "checking the JSON file itself" rather than "checking that each value is valid when parsed with library X".
I would also be in favor of trying to use the standard library first, and see how far we can take it. I don't mind us writing a bit of verbose code.
@ErikSchierboom updated the top post with some investigation of behavior in edge cases.
We probably want to define "valid JSON" in the spec, and perhaps explicitly forbid duplicate keys.
> I'd suggest that jsony or nim-json-serialization might be best in the long-term. But maybe it's better to stick with the current approach until we've implemented all the linting rules, and refactor it later.
I don't know, both jsony and nim-json-serialization seem to be maintained by relatively few people and I'd be hesitant to use those libraries instead of the built-in JSON library. It's also telling that so far, no track has actually had this issue with trailing commas, which leads me to believe that it is not that big of a deal.
But it's probably simple to use a modified `std/json` with stricter parsing.
You mean forking the existing code? Would this be something that you could PR to Nim itself?
> But it's probably simple to use a modified `std/json` with stricter parsing.

> You mean forking the existing code?
I meant that we do some workaround such that `import std/json` instead uses our own modified version of `lib/pure/json.nim` and/or `lib/pure/parsejson.nim`. We can do this by one of the following:
1. Adding the modified file(s) to this repo, and when running our GitHub Actions workflows, replacing the file(s) in the Nim installation directory.
2. Adding the modified file(s) to this repo, then adding a call to `patchFile` in our `config.nims` file (see the sketch after this list). This is better because it also affects the local development environment in the same way.
3. Applying some transformation to the AST of the relevant procs at compile-time. This is clever, but less obvious and less self-documenting.
4. Like 3, but importing the Nim compiler and applying the transformation as an extra compiler pass. Again, probably too clever.
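For option 2, a minimal `config.nims` sketch; the `vendor/patched/` paths are hypothetical, and this assumes NimScript's documented `patchFile(package, filename, replacement)` call:

```nim
# config.nims
# Redirect `import std/json` and `import std/parsejson` to the patched copies
# vendored in this repo (hypothetical paths).
patchFile("stdlib", "json", "vendor/patched/json")
patchFile("stdlib", "parsejson", "vendor/patched/parsejson")
```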
> Would this be something that you could PR to Nim itself?
Yes, it's possible to add it to Nim itself. But it wouldn't be available until Nim 1.6.0 anyway, which might take a while. (We shouldn't build a configlet release with the devel Nim compiler).
It would be added as an opt-in strict mode, since making `parseJson` or `parseFile` strict by default would completely break backwards compatibility.
I'd suggest we should do option 2 in the meantime regardless. The main downside is that we wouldn't immediately get upstream bug fixes in the patched files, unless we backport the latest changes manually. But such upstream changes to std/json would rarely affect us anyway, and backporting should be trivial (given that our diff is probably small) if necessary.
@ee7 I think that sounds like a good plan! 👍
In the meantime, we:
- Merged https://github.com/exercism/configlet/commit/15c84037fadc337f2949710e9ea35fd7def6961c, which forks `std/json` and `std/parsejson`
- Started using `jsony` for the "multi-key" checks in `configlet lint`
However, I'm still undecided about the best overall design for a refactor (not a high priority). For example, we could:
1. Use only `std/json`. Here, it's probably best to do a first pass to check the types, then use `json.to` to get an object.
2. Do all the "single-key" checks with `std/json`, then more complex checks with `jsony`. This is the current approach.
3. Check only the types using `std/json`, and do all the other checks after deserializing via `jsony` to an object.
4. Avoid `std/json` entirely, and use only `jsony` (see the sketch after this list). This is attractive from a performance perspective: we avoid the allocation of some dynamic `JsonNode`, which consumes nearly all of the `configlet lint` runtime, and instead directly populate an object. Performance isn't my top priority, but this would probably also help make the codebase more robust/readable/maintainable.
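A minimal sketch of option 4, deserializing straight into an object with jsony and skipping the intermediate `JsonNode`; the `ExerciseConfig` type here is hypothetical, not configlet's actual model:

```nim
import pkg/jsony

type
  ExerciseConfig = object   # hypothetical shape, for illustration only
    blurb: string
    source: string
    source_url: string

let raw = """{"blurb": "Say hello.", "source": "", "source_url": ""}"""

# jsony populates the object directly from the string.
let cfg = raw.fromJson(ExerciseConfig)
doAssert cfg.blurb == "Say hello."
```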
The latter two options have the downside of increasing our dependence on a non-stdlib package. However: the jsony source code is pretty short, and the author is a prolific and well-known member of the Nim community, who parses a lot of JSON.
The latter options also give us less control over error messages. For example, if we use jsony only, the most straightforward implementation means we'll only get one error message for a file that has a type error as well as other problems. And unless jsony gains a strict mode, we'll have to fork it to disallow at least:
- Trailing commas (assuming that stays necessary for later parsing by Ruby)
- Different key name capitalization
> Avoid `std/json` entirely, and use only `jsony`. This is attractive from a performance perspective: we avoid the allocation of some dynamic `JsonNode`, which consumes nearly all of the `configlet lint` runtime, and instead directly populate an object. Performance isn't my top priority, but this would probably also help make the codebase more robust/readable/maintainable.
I honestly don't care much about performance, as configlet is already incredibly fast. Robustness/readability/maintainability are much more important for configlet.
> The latter options also give us less control over error messages. For example, if we use jsony only, the most straightforward implementation means we'll only get one error message for a file that has a type error as well as other problems.
So if I'm interpreting this correctly, jsony is different from std/json in that it returns an error message if a type mismatch between the JSON content and the type to serialize to occurs? And in that case jsony only returns one (the first?) error? If so, I'd be totally fine with that. We'd be able to remove tons of validation code and type errors should be quite rare.
> And unless jsony gains a strict mode, we'll have to fork it to disallow at least:
Would that be a lot of work?
> jsony is different from std/json in that it returns an error message if a type mismatch between the JSON content and the type to serialize to occurs?
Yes. We can also fail fast for e.g. seeing a slug that is not kebab-case.
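A sketch of what failing fast could look like via jsony's `parseHook` extension point; the `Slug` wrapper type and the simplistic kebab-case rule are illustrative assumptions, not configlet's real validation:

```nim
import pkg/jsony

type
  Slug = distinct string
  Exercise = object          # hypothetical type, for illustration
    slug: Slug

# Custom hook: jsony calls this whenever it needs to parse a Slug value,
# so an invalid slug is rejected at parse time, with no second pass.
proc parseHook(s: string, i: var int, v: var Slug) =
  var raw: string
  parseHook(s, i, raw)       # reuse jsony's built-in string parsing
  for c in raw:
    if c notin {'a'..'z', '0'..'9', '-'}:
      raise newException(ValueError, "slug is not kebab-case: " & raw)
  v = Slug(raw)

let ex = """{"slug": "two-fer"}""".fromJson(Exercise)
doAssert ex.slug.string == "two-fer"
```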
> And in that case jsony only returns one (the first?) error?
Yes. Although I imagine it's technically possible to output all the type mismatches - but probably not worth it.
> If so, I'd be totally fine with that. We'd be able to remove tons of validation code and type errors should be quite rare.
I was thinking the same. Another advantage is better error messages: we'd find type errors at the time of parsing, and so we still have direct access to line number information.
> Would that be a lot of work?
I'd guess/hope that it wouldn't be too bad. It might even be simpler than having a separate first pass that checks the JSON is valid and that key name capitalization is correct. We could also try maintaining a patch, if the diff is small (this is what we do with cligen's parseopt3.nim currently).
> Another advantage is better error messages: we'd find type errors at the time of parsing, and so we still have direct access to line number information.
This is a very important point.
> We could also try maintaining a patch, if the diff is small
This has worked out well with parseopt3.nim. That file is from the standard library though, isn't it? I'm asking, because that file probably changes less than the jsony source code (see its commits).
I've looked at what should be patched:
> - Trailing commas (assuming that stays necessary for later parsing by Ruby)
> - Different key name capitalization
The first one is still required, as I've just checked it with the latest Ruby version. The second one, what is that about? Is jsony strict about the casing of keys?
> This has worked out well with `parseopt3.nim`. That file is from the standard library though, isn't it?
It's from `cligen/parseopt3`, which is indeed derived from `std/parseopt`.
For us, one of the main differences is that parseopt3 supports separating a short option and its value with a space, like:
```
configlet sync -e bob
```
(see https://github.com/exercism/configlet/commit/a897d05cb0b4 for background).
> I'm asking, because that file probably changes less than the jsony source code
Yes, jsony will probably see more churn than parseopt3. But, in the same way that cligen receives lots of work that doesn't touch parseopt3, maybe a patch would touch only relatively stable code (from the below, it only needs to forbid trailing commas for a jsony-only approach).
> [checking for trailing commas] is still required, as I've just checked it with the latest Ruby version.
OK - thanks.
> Is jsony strict about the casing of keys?
It's not completely strict, but it's stricter than I thought/remembered.
It turns out that the only looseness is "the value of a snake_case JSON key does set the value of a camelCase Nim object field".
See the jsony docs, and the relevant jsony code.
I've tried to illustrate below how jsony behaves. Feel free to stare at this:
```nim
import pkg/jsony

type
  ObjA = object
    foo_bar: int
  ObjB = object
    anotherField: int

func init(T: typedesc[ObjA | ObjB], s: string): T =
  fromJson(s, T)

# Summary: jsony is stricter when the object field name is snake_case style.
func main =
  block:
    let t = ObjA.init """{"foo_bar": 1}"""
    doAssert t.foo_bar == 1

  # The value of a camelCase JSON key DOES NOT set the value of a
  # corresponding snake_case field.
  block:
    let t = ObjA.init """{"fooBar": 1}"""
    doAssert t.foo_bar == 0 # The default value.

  # And other capitalization is also not accepted.
  block:
    let t = ObjA.init """{"foo_Bar": 1}"""
    doAssert t.foo_bar == 0
  block:
    let t = ObjA.init """{"foobar": 1}"""
    doAssert t.foo_bar == 0

  # ----------------------------------------------------------------------------
  # The value of a snake_case JSON key DOES set the value of a corresponding
  # camelCase field.
  block:
    let t = ObjB.init """{"another_field": 1}"""
    doAssert t.anotherField == 1
  block:
    let t = ObjB.init """{"anotherField": 1}"""
    doAssert t.anotherField == 1

  # But other capitalization is not accepted.
  block:
    let t = ObjB.init """{"another_Field": 1}"""
    doAssert t.anotherField == 0
  block:
    let t = ObjB.init """{"anotherfield": 1}"""
    doAssert t.anotherField == 0

main()
```
Summary: I think that unpatched parsing with jsony alone is sufficient to check JSON key names (and everything except trailing commas), as long as our spec for Exercism JSON files has no uppercase character in any key name, and we do one of these:
1. Use snake_case for our Nim object field names that jsony uses (and silence the stylecheck hint - as we already do e.g. here). See the sketch after this list.
2. Use camelCase for our Nim object field names that jsony uses, and patch or fork jsony's default parsing of JSON objects.
3. Use camelCase for our Nim object field names that jsony uses, and provide a different `parseHook` for our specific objects.
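A minimal sketch of the first option, assuming the build uses `--styleCheck:hint` and that the relevant hint name is `Name`; the `TrackConfig` shape is an illustrative subset, not the full spec:

```nim
# Field names deliberately mirror the snake_case JSON keys, so jsony's
# stricter snake_case matching (shown above) applies to every key.
{.push hint[Name]: off.}  # silence the naming-style hint for this section
type
  OnlineEditor = object      # illustrative subset of a track config.json
    indent_style: string
    indent_size: int
  TrackConfig = object
    blurb: string
    online_editor: OnlineEditor
{.pop.}
```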
I'd suggest the first. Which leaves us doing one of these:
1. Parse only with jsony, and patch it to error for a trailing comma.
2. Do almost everything with jsony, but do one pass with our existing patched `std/json` just to error for a trailing comma.
3. Do almost everything with jsony, but do one pass with some other Nim code, just to error for a trailing comma.
4. Do almost everything with jsony, but make `configlet lint` call some non-Nim code, just to error for a trailing comma. For example, we could run `find . -name '*.json' -exec jq '.' {} + > /dev/null` when the `CI` environment variable exists (or when `jq` is installed, which it is in CI). This is simple, but means that a trailing comma may be detected only in CI, and not locally.
5. Do almost everything with jsony, but add an org-wide workflow that runs the `jq` command in 4. I think this is bad.
6. Parse only with jsony, and use a different Ruby library (or use the current one, but change its options/patch it).
I think 1 is best, but we can do 2, 3, or 4 as a first implementation if it turns out that 1 is difficult.
There is some subtlety though: if a user runs configlet fmt when there is a trailing comma, should configlet error or remove it? What about configlet sync? We could consider being permissive in what we accept, and strict in what we output (robustness principle). So maybe a jsony patch with configurable trailing comma behavior...
> I think 1 is best, but we can do 2, 3, or 4 as a first implementation if it turns out that 1 is difficult.
Agreed.
> There is some subtlety though: if a user runs configlet fmt when there is a trailing comma, should configlet error or remove it? What about configlet sync? We could consider being permissive in what we accept, and strict in what we output (robustness principle). So maybe a jsony patch with configurable trailing comma behavior...
I wouldn't mind erroring on a trailing comma for `configlet sync` or `configlet fmt`, as the official spec does not support trailing commas, so we would just be following the spec :)
@ErikSchierboom do you have a reference link for the "does not support trailing commas"? I did not find any mention of trailing commas, so I was not able to figure out whether they are simply not mentioned, unsupported, or disallowed. I checked (searched for the word "trailing") v1.0 and v1.1 as it is here, and did not find any mention of this. I might have missed it though.
@kotp That link is a spec for JSON APIs. For the standards for JSON itself, see the railroad diagrams on https://www.json.org:

And:
- https://datatracker.ietf.org/doc/html/rfc8259#section-4
- https://www.ecma-international.org/wp-content/uploads/ECMA-404_2nd_edition_december_2017.pdf (section 6)
They're allowed in JavaScript, though. See:
- https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas
Thanks @ee7. I saw that as well, though I still cannot find anything about trailing commas being either unsupported or allowed, or even a "should" statement regarding this. The https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Trailing_commas document states that JSON disallows trailing commas, but does not show where that information is made known.
So I would vote to not allow them, if it is true that they are disallowed.