Generalize JSON macro to any serializer
I really like the json! macro, but I'd like to use it with other serializers too, or at least for streaming serialization of a JSON structure (rather than constructing the entire Value tree in memory and then flushing it all at once).
It looks like it should be possible to add an intermediate macro that accepts a $serializer:ident and calls serialize_map / serialize_seq on it instead of creating values; json! would then be just a wrapper that invokes the intermediate macro with the internal Serializer, much like to_value currently does. However, I noticed the comment about breaking changes and wondered whether this is something that can be considered here, or whether it should be done outside this project.
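To make the proposed layering concrete, here is a hypothetical, heavily simplified sketch of the idea, with a plain String standing in for a real serde Serializer (none of the macro names or the quoting logic here are real serde_json API; the inner macro only handles flat objects with numeric values):

```rust
// Inner macro: streams key/value pairs straight into a sink, playing the
// role of the proposed $serializer-accepting intermediate macro.
macro_rules! stream_json {
    ($out:ident, { $($key:literal : $val:expr),* }) => {{
        $out.push('{');
        let mut first = true;
        $(
            if !first { $out.push(','); }
            first = false;
            // NOTE: assumes numeric values; a real version would recurse
            // and dispatch on the value's JSON shape.
            $out.push_str(&format!("\"{}\":{}", $key, $val));
        )*
        let _ = first;
        $out.push('}');
    }};
}

// Outer wrapper: supplies the default sink, the way json! would supply
// serde_json's internal Serializer.
macro_rules! my_json {
    ($($tt:tt)*) => {{
        let mut out = String::new();
        stream_json!(out, $($tt)*);
        out
    }};
}

fn main() {
    let s = my_json!({ "a": 1, "b": 2 });
    println!("{}", s); // {"a":1,"b":2}
}
```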
cc @SergioBenitez
Awesome idea. I think this boils down to expanding json!(...) into some anonymous type that implements the Serialize trait.
serde_json::to_value(json!({ "x": [x] }))
serde_json::to_value({
    struct _A<'a, T> {
        x: &'a T,
    }
    impl<'a, T> Serialize for _A<'a, T> where T: Serialize {
        fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
            where S: Serializer
        {
            let mut map = serializer.serialize_map(Some(1))?;
            map.serialize_entry("x", &_B { x: &*self.x })?;
            map.end()
        }
    }
    struct _B<'a, T> {
        x: &'a T,
    }
    impl<'a, T> Serialize for _B<'a, T> where T: Serialize {
        fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
            where S: Serializer
        {
            let mut seq = serializer.serialize_seq(Some(1))?;
            seq.serialize_element(&*self.x)?;
            seq.end()
        }
    }
    _A { x: &x }
})
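The nesting pattern above can be demonstrated self-contained, with a hypothetical Emit trait standing in for serde's Serialize (the names _A, _B, Emit, and render are all illustrative, not real serde API); each level streams straight into the output instead of building a Value tree:

```rust
// Stand-in for Serialize: write yourself into a growing output buffer.
trait Emit {
    fn emit(&self, out: &mut String);
}

impl Emit for i32 {
    fn emit(&self, out: &mut String) {
        out.push_str(&self.to_string());
    }
}

// One anonymous struct per JSON level of `{ "x": [x] }`.
struct _A<'a, T> { x: &'a T }
struct _B<'a, T> { x: &'a T }

impl<'a, T: Emit> Emit for _A<'a, T> {
    fn emit(&self, out: &mut String) {
        out.push_str("{\"x\":");
        _B { x: self.x }.emit(out); // delegate the array level
        out.push('}');
    }
}

impl<'a, T: Emit> Emit for _B<'a, T> {
    fn emit(&self, out: &mut String) {
        out.push('[');
        self.x.emit(out); // the interpolated expression
        out.push(']');
    }
}

fn render(x: i32) -> String {
    let mut out = String::new();
    _A { x: &x }.emit(&mut out);
    out
}

fn main() {
    println!("{}", render(42)); // {"x":[42]}
}
```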
Would you be interested in implementing this? Don't worry about backward compatibility, we can figure that out once we have something working.
Ah yeah, this might work too; I wasn't sure about the performance implications of intermediate types compared to a series of calls, though.
As for implementing this, I guess I can as soon as I get a bit more time to dig into the existing json! macro and understand how it works :)
Actually, the way you suggested should be much easier to implement than mine, so I'll give it a shot next week unless you want to try earlier.
Your way would not have worked because of the signature of SerializeSeq::serialize_element:
fn serialize_element<T>(&mut self, value: &T) -> Result<(), Self::Error>
where T: ?Sized + Serialize;
It requires that you pass in a Serialize type so "a series of calls" won't work.
Conceptually though, I would expect both ways to have identical performance.
@dtolnay When I was playing with this idea locally, for serialize_element I was assuming Serialize was already implemented (in fact, the whole reason I need this macro is for simple Serialize implementations that can't be covered by derive). For the more generic case, I agree, it wouldn't work without creating an intermediate type.
One problem with auto-generating types as above, though, is generating "fresh" identifiers for them; as far as I know, there is currently no easy way to do that in Rust.
You should be able to just use the same identifier for all of them and set up the scoping so that the right one gets resolved in the right place.
let mut map = serializer.serialize_map(Some(1))?;
map.serialize_entry("x", {
    struct _S { /* ... */ }
    impl Serialize for _S { /* ... */ }
    &_S { /* ... */ }
})?;
map.end()
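The scoping trick can be checked with plain Rust: every nesting level defines a type with the same name, and block scoping resolves each use to the nearest definition, so the macro never needs fresh identifiers (the name _S and the demo function here are illustrative):

```rust
fn demo() -> u32 {
    let outer = {
        struct _S(u32);
        impl _S {
            fn get(&self) -> u32 {
                let inner = {
                    struct _S(u32); // shadows the outer _S inside this block
                    _S(10).0
                };
                self.0 + inner
            }
        }
        _S(32).get()
    };
    outer
}

fn main() {
    println!("{}", demo()); // 42
}
```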
Btw, for sequences (static JSON arrays) we could convert them to tuples of references instead of temporary named struct types; in that case Serialize is already implemented and we can pass such a tuple upwards without hassle.
I would prefer not to do that because it would artificially limit the length of arrays you could use with this syntax. We only have Serialize impls for small tuples.
@dtolnay Well, that should be solved once Rust gets const generics, although it's unclear how long that will take. Do you think users will provide hardcoded JSON arrays bigger than the tuple limit? (Note that this doesn't limit interpolating vectors or any other expressions in value position, only hardcoded arrays.)
Another issue I've run into is that we need to generate a struct with a type parameter for each interpolated expression, and again, generating fresh names is hard and there is no typeof macro or similar. I've been thinking about having just S<T> where T is a tuple, but then I still can't match on it inside the implementation.
The only idea I have at the moment is to create a temporary array / map of &Serialize trait objects and rely on dynamic dispatch being reasonably well optimized, but that doesn't feel like the right way to do it, so if you have better ideas, I'd be grateful to hear them.
One more option would be to put a closure into the struct instead of the elements; impl Serialize would then just call that internal closure, which in turn has access to the original data. But again, I have concerns about the performance of such dynamic dispatch.
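The trait-object idea can be sketched without serde at all, using std's Display in place of Serialize (the render function is illustrative): heterogeneous values sit behind one vtable-dispatched trait, which avoids naming each value's type at the cost of dynamic dispatch.

```rust
use std::fmt::Display;

fn render() -> String {
    let a: i32 = 1;
    let b: &str = "two";
    // A fixed-size array of trait objects: no per-element type names or
    // generated generic parameters needed, just one erased trait.
    let items: [&dyn Display; 2] = [&a, &b];
    items
        .iter()
        .map(|d| d.to_string())
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    println!("{}", render()); // 1,two
}
```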
@dtolnay Do you think we could limit keys to be just identifiers? That's what I did in our project, and then things get much, much easier because you can reuse the same ident for the generic type binding and the struct field.
I would strongly prefer not to limit keys to be just identifiers. Is there a way to structure the expanded code such that dynamically generated generic type bindings and struct keys are not required, for example by using tuples judiciously?
@dtolnay I thought about tuples, but then serialization is only defined for a limited set of tuple arities, and we'd need specialized key-value records on top of that.