purescript-cookbook
Decode JSON using a type-class-based approach via `argonaut-codecs`
Recipe Name
DecodeArbitraryJSON
Recipe Description
I'm suggesting that this Stack Overflow question would make a good simple recipe: https://stackoverflow.com/questions/56533551/purescript-argonaut-decode-arbitrary-key-value-json
The question seems like something that would occur to other people and the accepted answer is very close to ready to package as a recipe:
```purescript
import Data.Argonaut (decodeJson, jsonParser)
import Data.Either (Either)
import Data.Map as Map
import Foreign.Object as F

decodeAsMap :: String -> Either _ (Map.Map String (Array String))
decodeAsMap str = do
  json <- jsonParser str
  obj <- decodeJson json
  pure $ Map.fromFoldable $ (F.toUnfoldable obj :: Array _)
```
I feel like this is more an example of how to decode "Map String (Array String)" JSON, rather than "arbitrary" JSON.
But lots of JSON decoding examples with argonaut would be helpful. I think all of those could be combined into a single recipe.
Yes, the way it's phrased there (just cut and pasted from SO, I think) that's true. But I still think there's something to the idea of a recipe that is a counterpoint to another recipe for "how do I handle JSON", which would model the JSON with types and write encodeJSON / decodeJSON functions.
I mean, both types-known-in-advance and types-unknown-in-advance will necessarily be quite arbitrary examples of JSON structure, if you see what I mean.
What if the recipe was framed as "How to decode objects"?
- If we know what fields to expect: Decode to record.
- If fields are unknown: Decode to Map, Array of Array, or Array of Tuple
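To sketch that contrast with `argonaut-codecs` (the type and function names below are illustrative, not from any existing recipe):

```purescript
module KnownVsUnknown where

import Prelude

import Data.Argonaut.Core (Json)
import Data.Argonaut.Decode (JsonDecodeError, decodeJson)
import Data.Either (Either)
import Data.Map (Map)
import Data.Map as Map
import Data.Tuple (Tuple)
import Foreign.Object as FO

-- Fields known in advance: decode straight to a record.
type Config = { name :: String, tags :: Array String }

decodeConfig :: Json -> Either JsonDecodeError Config
decodeConfig = decodeJson

-- Fields unknown in advance: decode to a Foreign.Object,
-- then convert its key/value pairs to a Map.
decodeDynamic :: Json -> Either JsonDecodeError (Map String (Array String))
decodeDynamic json = do
  obj <- decodeJson json
  pure $ Map.fromFoldable (FO.toUnfoldable obj :: Array (Tuple String (Array String)))
```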
There's potential for an enormous number of JSON encoding and decoding recipes. I think it would be wise to do some planning on what's important to cover, and then try to minimize overlap between examples. Do we want to include examples for everything in the argonaut guide?
That sounds totally right...even just one or two of the second category would give people the flavor of the idea. And pair / contrast very well with the "unknown fields" example in the SO question.
Not sure what names one might give those recipes and, as you say, some planning required. Probably something best discussed on Discourse?
FWIW, the Dog API is public and returns JSON very similar to the shape in that StackOverflow question, could be useful as an example of the "if fields are unknown" case.
https://dog.ceo/api/breeds/list/all returns
```json
{
  "message": {
    "affenpinscher": [],
    "african": [],
    "airedale": [],
    "akita": [],
    "appenzeller": [],
    "australian": [
      "shepherd"
    ],
    "basenji": [],
    "beagle": [],
    ...
```
I'm thinking the JSON decoding examples should just use hardcoded strings, or perhaps strings generated over the FFI from JSON.stringify. I've had issues in the past with API demo service downtime.
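For instance, a recipe in that style could hardcode a trimmed version of the Dog API response. This is a sketch assuming `argonaut-codecs` (the module and function names are made up for illustration):

```purescript
module HardcodedJson where

import Prelude

import Data.Argonaut.Decode (JsonDecodeError, decodeJson, parseJson)
import Data.Either (Either)
import Data.Map (Map)
import Data.Map as Map
import Data.Tuple (Tuple)
import Foreign.Object as FO

-- A hardcoded sample instead of a live API call, so the
-- recipe keeps working even if the demo service is down.
sample :: String
sample = """{ "message": { "australian": ["shepherd"], "beagle": [] }, "status": "success" }"""

-- Pull out the "message" object, whose keys are unknown in advance.
breeds :: String -> Either JsonDecodeError (Map String (Array String))
breeds str = do
  json <- parseJson str
  (r :: { message :: FO.Object (Array String) }) <- decodeJson json
  pure $ Map.fromFoldable (FO.toUnfoldable r.message :: Array (Tuple String (Array String)))
```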
One thing to note here is that there are three (relatively?) popular ways to decode (and even encode) from (/to) JSON:
- argonaut
- simple-json
- codecs
So which one would get to be the blessed library to be exposed in the cookbook? I personally would opt for codecs, but I know a lot of people prefer the other options. How do we decide this?
I'm a fan of argonaut-codecs.
Haven't tried out codec-argonaut yet, but that also has appeal.
Not sure what you meant by codecs. Would be great if their names were more distinct (https://github.com/garyb/purescript-codec-argonaut/issues/32)
There's the core Argonaut library which can be used for manual parsing, and then argonaut-codecs for typeclass-based decoding and encoding, and codec-argonaut for bidirectional decoding & encoding.
If you were trying to avoid opinions then argonaut-core is an option.
I think argonaut-codecs is the most accessible library and I tend to use it for quick and dirty or one-way encoding or decoding. I think it makes the most sensible default choice for the cookbook with that in mind.
I think codec-argonaut is the better option for building applications and long-term maintenance. I am going to switch Real World Halogen to use it. But it's also more advanced than argonaut-codecs and its advantages may not be as clear in the cookbook format.
Maybe both argonaut-codecs and codec-argonaut have room to exist in the cookbook together, but if there were only one, I'd choose argonaut-codecs.
(Edit: I haven't used simple-json in a long time, but afaik its main advantage over argonaut-codecs was automatic encoding / decoding of records, which was added to argonaut-codecs a year or two ago.)
> One thing to note here is that there are three (relatively?) popular ways to decode (and even encode) from (/to) JSON. .... So which one would get to be the blessed library to be exposed in the cookbook?
I think this kind of misses the point. The cookbook isn't here to say which is the "right" way, merely how to use each way. We didn't declare a "blessed" front-end framework (e.g. Halogen, React) by disallowing PRs from other frameworks. Why would we make such a choice here? Each of those libraries chooses to solve a given problem in a different way, and each developer has different reasons for preferring one over another.
> (Edit: I haven't used simple-json in a long time, but afaik its main advantage over argonaut-codecs was automatic encoding / decoding of records, which was added to argonaut-codecs a year or two ago.)
I'm not familiar with simple-json, so I don't know how similar/dissimilar it is to argonaut-codecs. Would someone care to examine that?
So, I'm proposing we change a few things in this issue:
- Split this issue into two (or maybe three, depending on the `simple-json` question above) issues. We'll use this issue for type class-based codecs, since that's what initially opened the issue, and the second issue for value-based codecs (for lack of a better term). If `simple-json` is still worth using, we'll add a third issue for that.
- Change the unique recipe name to `TypeClassBasedCodecLog` (the recipe description can highlight that it uses `argonaut-codecs`) and reserve `ValueBasedCodec` (the recipe description can highlight that it uses `codec` and `codec-argonaut`) for the second issue.
Lastly, I've been meaning to add a codec-based example here at some point, but I didn't get around to it.
I'd also propose we use "meta-language" to describe things, too. I believe this is valid JSON, but feel free to correct me if I'm wrong:
```json
{
  "string": "string value",
  "boolean": true,
  "int": 4,
  "number": 42.0,
  "array": [
    "elem 1",
    "elem 2",
    "elem 3"
  ],
  "record": {
    "foo": "bar",
    "baz": 8
  },
  "sumTypesNoTags": [
    "Nothing",
    "Just 1"
  ],
  "sumTypeWithTags": [
    {"tag": "Nothing"},
    {"tag": "Just", "value": 1}
  ],
  "productTypesNoLabels": [
    1,
    true,
    "stuff"
  ],
  "productTypesWithLabels": {
    "key1": "value1",
    "key2": "value2",
    "key3": "value3"
  }
}
```
I believe the above would also reveal each approach's strengths and weaknesses.
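To make one of those concrete, here's a hedged sketch of how the "sumTypeWithTags" shape might be decoded with `argonaut-codecs` (the tag and field names follow the JSON above; the function name is assumed):

```purescript
module TaggedSum where

import Prelude

import Data.Argonaut.Core (Json)
import Data.Argonaut.Decode (JsonDecodeError(..), decodeJson, (.:))
import Data.Either (Either(..))
import Data.Maybe (Maybe(..))

-- Decode {"tag": "Nothing"} or {"tag": "Just", "value": 1}
-- into a Maybe Int, dispatching on the "tag" field.
decodeTagged :: Json -> Either JsonDecodeError (Maybe Int)
decodeTagged json = do
  obj <- decodeJson json
  tag <- obj .: "tag"
  case tag of
    "Nothing" -> pure Nothing
    "Just" -> Just <$> obj .: "value"
    _ -> Left (TypeMismatch ("unknown tag: " <> tag))
```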
I've added the value-based code example via #209.