go-bitbucket
Why use mapstructure.Decode?
The decoding of the Atlassian API responses is primarily done using mapstructure.Decode. Is there a compelling reason to use this instead of json.NewDecoder?
@oogali Before that module was introduced, remapping the JSON responses onto structs was confusing. There were many map[string]interface{} and map[interface{}]interface{} types in the code, and we had a hard time mapping JSON responses to structs. Then a contributor proposed using this.
That makes sense.
Would you have any interest in me modeling out the structs according to the Bitbucket API so that one could use the standard library JSON marshaling functions?
For what it's worth, I would highly appreciate something like that, @oogali. It seems like if we had it modeled, it would be way easier to marshal/unmarshal everything into place, no? Versus ranging through all the map[string]interface{}s.
I'd be happy to help you wherever you want as well.
Previously, the Bitbucket (Atlassian) team often changed API responses and field types without notice. Because of that, I had decided not to align the structs with the API responses and fields; keeping them maintained would be very painstaking, I thought.
Recently, I feel that the structure of the API responses has become stable, so it may be okay to create response structs. But all of the responses are huge as usual, and most of the response data is unnecessary for users; it seems users extract and use only a small part of the response fields. Therefore, I think the correction/refactoring work related to the responses may have a poor cost-benefit ratio.
Those are good points and I definitely know what you mean; I agree. Tbh it's not really a burden to parse the returned struct for any specific nested info one may need. This is probably a case where it's best to write no code.
I've used segmentio/encoding in projects where I needed to maintain high performance of encoding/decoding JSON with limited resource usage.
I haven't benchmarked it against a map[string]interface{} type. But while I imagine the performance of decoding into a map may be faster, all of that performance is lost in copying or "re-marshaling" the data into a different object.
We're pushing the responsibility for creating, maintaining, and testing the decodeThing style of functions down to the end users of this library, which feels somewhat counterintuitive.
Would you please try writing a POC or some benchmarking code? I am a little interested.