Thoth.Json
Why/when are you using the auto encoders/decoders when using Thoth.Json?
Hello,
I am deliberately not giving any leads to start the discussion, because I am not a user of this feature and would like to understand why people favour it over the manual decoders, or what problem auto coders solve that manual coders do not.
Sometimes for me it's just not having to write coders for basic types, and only writing coders when I actually want control over the encoding/decoding. Would I survive without auto coders? Yes!
I tend to use auto decoders when I have discriminated unions, because there is nothing else out of the box...
I use them because: 1. I share models between client and server. 2. Most of the time they are enough to get things done with less code. 3. I am lazy :)
Auto coders are great. Why write boilerplate when you don't have to? Less code = less bugs.
It just works is a great argument :)
Thinking about it, I would say the basic reason is this: I don't really care about the json representation. I mainly want to get stuff from nice F# world to a string, and then safely get it back.
Auto coders are one of the main reasons I switched from Elm to Fable - thank you :)
- They just work
- Way less boilerplate
- Share types between the client and server
- They work best with simple types, but even for large projects with lots of different kinds of updates I can do a post in one line of code and get back either the object I want or a friendly user error:
let inline deSerializeOrError<'a> (text: string) (isCamelCase: bool) =
    match Thoth.Json.Decode.Auto.fromString<'a>(text, CamelCase) with
    | Ok obj -> Ok obj
    | Error e ->
        match Thoth.Json.Decode.Auto.fromString<ApplicationError>(text, if isCamelCase then CamelCase else PascalCase) with
        | Ok apperr -> ApplicationError.asString apperr |> Error
        | Error _ -> Error e

let inline withJsonBody (data: 'a) (req: RequestProperties list) : RequestProperties list =
    List.append req [ RequestProperties.Body (Thoth.Json.Encode.Auto.toString<'a>(0, data, CamelCase) |> unbox) ]

let inline post<'a, 'b> src siteKey method valObj () = promise {
    let props =
        Shared.RestApi.http.postHeadersCustom (adminApiHeaders siteKey)
        |> withJsonBody valObj
    let! response = Fetch.fetch (src + method) props
    let! text = response.text()
    return deSerializeOrError<'b> text true
}
I find this discussion interesting. I can add my thoughts, though I should disclose that I've never used Thoth.
Still, I can say that I considered adding auto encoders/decoders to Fleece long, long ago, but never took the step because it would result in a dangerous mix:
Fleece itself is a Json FP mapper, which provides compile-time guarantees, in the sense that if someone changes your model, you will be forced to handle the change in your Json mapping section.
Using auto (de/en)coders rests on a completely different assumption, which is also valid, but mostly in scenarios where you don't care what the json will look like, or where you control change in a different way, i.e. by using intermediate DTOs.
So, if you mix both approaches you will get the worst of both worlds :)
Anyway, in Fleece the boilerplate pain is alleviated for primitive types by optionally using the generic (de)serializers, which resolve at compile time, so that is one less reason for auto (en/de)coders, as in the scenario mentioned by @giulioungaretti (if that is what he meant by "basic types").
Also, Fleece supports composing "alternatives", which makes it possible to express codecs for DUs, removing yet another reason to use them, as in the scenario pointed out by @nojaf.
Finally, another difference, regarding the bug avoidance mentioned by @l3m (although his use case is clearly the "I don't care about the json" one), is that Fleece lets you define codecs, i.e. both directions in a single shot, which avoids the need for the user to write round-trip unit tests.
My conclusion is: never say never, I may add auto (en/de)coders to Fleece, but if/when I do, the dangers of mixing them would have to be carefully documented, and I would possibly force users to link an additional dll or open a specific namespace.
I think the situation here in Thoth is more or less the same, with the addition of the cases mentioned above, which justify their existence a bit more. But users should still be aware of the paradigm switch they bring when mixed with manual (en/de)coders.
I'm a user of Thoth.Json.Net. I use the library for whenever I need serialization in an F# .NET application. I don't use the type sharing between Fable and .NET apps at the moment, but I think that's a useful feature.
The question is formulated in a way where I'm not sure my answer is on topic, but it could be relevant if you want the entire picture of the library's users, so I'll post it. More information is good :)
I don't use auto encoders / decoders. I don't plan on using them. I'm good with adding a little "boilerplate" to have control and explicitness. If having the feature does not severely affect the contract (and it seems that it does not from my PoV) then it's a plus IMHO. It drives adoption, which is good.
Most of my experience is with Json.NET, and that got me in the habit of defining separate DTOs with simple data types that I map manually to richer domain types.
I’ve only used Thoth a little bit. I found the auto (en/de)coders to be similarly quite useful when combined with simple DTOs
I do like that (en/de)coding manually allows me to skip the separate DTO step if I want to, but nowadays with anonymous records I often use those for the DTOs so it’s not a big deal anyway.
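For what it's worth, the anonymous-record-as-DTO workflow mentioned above can be sketched roughly like this. The User and EmailAddress types are invented for illustration, and the Encode.Auto call assumes a recent Thoth.Json:

```fsharp
open Thoth.Json // Thoth.Json.Net on the server side

// Hypothetical rich domain type with a single-case wrapper.
type EmailAddress = EmailAddress of string
type User = { Name: string; Email: EmailAddress }

// Flatten the domain type into an anonymous-record DTO,
// then let the auto encoder serialize the simple shape.
let toJson (user: User) =
    let (EmailAddress email) = user.Email
    Encode.Auto.toString(0, {| name = user.Name; email = email |})
```

Because the anonymous record only exists at the serialization boundary, there is no separate DTO type to maintain.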
Another example where I actually would not know what to do without auto coders!
open System.IO
open System.Text
open Microsoft.Azure.Cosmos
open Microsoft.Extensions.Logging
open Newtonsoft.Json
open Newtonsoft.Json.Linq
open Thoth.Json.Net

type Serializer(extraCoders, logger: ILogger) =
    inherit CosmosSerializer()

    member _.Logger = logger

    override this.FromStream<'T>(stream: System.IO.Stream) : 'T =
        this.Logger.LogDebug(sprintf "Serializer from stream \n")
        use sr = new StreamReader(stream)
        use jr = new JsonTextReader(sr)
        let decoder =
            Decode.Auto.generateDecoderCached<'T> (extra = extraCoders)
        let token = JToken.Load(jr)
        this.Logger.LogDebug(sprintf "Token from stream %O \n" token)
        let res = Decode.fromValue "" decoder token
        this.Logger.LogDebug(sprintf "Result from stream %O \n" res)
        match res with
        | Ok res -> res
        | Error err -> raise (JsonSerializationException err)

    override this.ToStream<'T>(input: 'T) : System.IO.Stream =
        this.Logger.LogDebug(sprintf "Serializer to stream %O \n" input)
        let ms = new MemoryStream()
        use swr = new StreamWriter(ms, Encoding.Default, 1024, true)
        use jr = new JsonTextWriter(swr)
        let encoder =
            Encode.Auto.generateEncoderCached<'T> (extra = extraCoders)
        let json = encoder input
        json.WriteTo(jr)
        jr.Flush()
        swr.Flush()
        ms.Position <- 0L
        ms :> System.IO.Stream
Thank you all for your answers.
@gusty:
My conclusion is, I would "never say never" will add auto(en/de)coders to Fleece, but if/when I do it, the dangers of mixing them would have to be carefully documented and possible force to link an additional dll, or open a specific namespace.
You are 100% right. Originally the auto coders were meant to be simple: I agreed to add them in order to map simple types, and if people wanted control over the output they were supposed to use manual coders. But over time people wanted more and more control over the auto coders, for adding unsupported types or for custom decoding/encoding when certain conditions are met, and I kept agreeing to add these new features.
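As a rough illustration of that "unsupported types" escape hatch: Thoth.Json exposes Extra.withCustom for pairing a custom encoder and decoder that the Auto API then picks up. The UserId and Account types below are made up for the sketch:

```fsharp
open System
open Thoth.Json // Thoth.Json.Net on the server side

// Hypothetical wrapper the auto coders don't know how to handle.
type UserId = UserId of Guid

let encodeUserId (UserId g) = Encode.guid g
let decodeUserId : Decoder<UserId> = Decode.guid |> Decode.map UserId

// Register the pair as an extra coder; the Auto API uses it
// wherever a UserId appears inside the type being (de)serialized.
let extra = Extra.empty |> Extra.withCustom encodeUserId decodeUserId

type Account = { Id: UserId; Name: string }

let roundTrip (account: Account) =
    let json = Encode.Auto.toString(0, account, extra = extra)
    Decode.Auto.fromString<Account>(json, extra = extra)
```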
@nojaf
I tend to use auto decoders when I have discriminated unions, because there is nothing else out of the box...
Depending on how you represent them, it is true that you could use Decode.oneOf, Decode.andThen, and Decode.object.
For encoding them, in general, I use a standard active pattern.
I am not sure what we can improve here for DUs in the manual experience without involving reflection, which would indeed take us back to auto coders.
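To make the manual option concrete, here is a minimal sketch of the Decode.andThen approach for a DU. The Shape type, the field names, and the "type" discriminator are invented for illustration:

```fsharp
open Thoth.Json // Thoth.Json.Net on the server side

type Shape =
    | Circle of radius: float
    | Rectangle of width: float * height: float

// Manual decoder: read a discriminator field first,
// then dispatch to the right per-case decoder with andThen.
let shapeDecoder : Decoder<Shape> =
    Decode.field "type" Decode.string
    |> Decode.andThen (fun kind ->
        match kind with
        | "circle" ->
            Decode.field "radius" Decode.float
            |> Decode.map Circle
        | "rectangle" ->
            Decode.map2 (fun w h -> Rectangle(w, h))
                (Decode.field "width" Decode.float)
                (Decode.field "height" Decode.float)
        | other ->
            Decode.fail (sprintf "Unknown shape kind: %s" other))

// Manual encoder: a plain pattern match per case.
let encodeShape shape =
    match shape with
    | Circle r ->
        Encode.object [ "type", Encode.string "circle"
                        "radius", Encode.float r ]
    | Rectangle (w, h) ->
        Encode.object [ "type", Encode.string "rectangle"
                        "width", Encode.float w
                        "height", Encode.float h ]
```

The upside is full control over the JSON shape; the downside is exactly the per-case boilerplate the auto coders remove.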
As a side note @tangledupinblue, there is also Thoth.Fetch which is linking Fetch and Thoth.Json together so you don't have to write your own "bridge" code. I just wanted to mention it in case you didn't know of the project.
I must admit that at first my goal with this question was to be able to remove the auto coders from Thoth.Json, because they are becoming a monster (hard to maintain, and I think there are bugs around the edges) and IMHO people abuse them...
I wasn't planning to just remove them and leave people blocked ^^ I was thinking of adding code generation as a replacement, because I thought people preferred auto coders to avoid writing "boilerplate", and because we developers are lazy, as some of you said :)
However, code-generation can't cover all the cases.
As @giulioungaretti showed in his last example, auto coders are able to work with generics, which code generation will not be able to do.
If I try to summarise, people use auto coders:
- When not caring about the JSON representation
- When sharing F# code on both sides (.Net / Fable)
- Out of convenience/laziness
- Because it supports generics
For some users, auto coders are the reason why they adopted Thoth.Json.
I share types between front end and back end. I prefer auto decoders where I can use them, because it is less code to write and less code to read.
It's a big selling point for Fable when I discuss it with people: not having to write any code to share types (and some functions) between BE and FE. I have one situation where
I have to use manual decoders because of one recursive type in my project. It's so painful to look at: so much code, so much noise. I try to hide this file from people. :-)
I like the idea of manual coders for the case: "This is especially useful if you use Fable without sharing your domain with the server."
EDIT: I updated Thoth.Json to 4.1.0 and auto decoders now work on recursive types. I no longer need manual encoding in my projects. Thank you.
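A minimal sketch of the recursive case, assuming Thoth.Json 4.1.0 or later and a made-up Tree type:

```fsharp
open Thoth.Json // Thoth.Json.Net on the server side

// Hypothetical recursive type; since 4.1.0 the auto decoder
// handles the recursion without a hand-written decoder.
type Tree =
    { Value: int
      Children: Tree list }

let json = """{ "value": 1, "children": [ { "value": 2, "children": [] } ] }"""
let tree = Decode.Auto.fromString<Tree>(json, CaseStrategy.CamelCase)
```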
I don't want to open another issue since I have another issue open on Thoth.Fetch that I haven't fully followed up on yet, but would using it for a type that's nested about four layers deep count as abuse?
For me, an async computation just hangs on Decode.Auto.fromString without an exception or anything. I'm trying to decide if I should just suck it up and write a manual coder.
Update: It didn't like that I was using a unit of measure <cad> on decimal. I added an extra coder, and voila.
I guess UoMs aren't erased in Fable. Still not quite sure why it hangs the async though.
Update 2: Wait, my mistake, removing the UoM was the only thing that fixed it. An extra coder didn't.
Just came across this.
My reason for using them: they work, like, really well. They saved me quite a bit of hassle, to be honest. And I've pushed quite a few hierarchies through them. :)