Support `@JsonUnwrapped` with `@JsonCreator` (if possible)

Open cowtowncoder opened this issue 7 years ago • 17 comments

(note: follow-up for #265)

The current implementation of @JsonUnwrapped deserialization requires handling all other properties first, and only then reconstructing the unwrapped objects. This works fine via fields/setters, but cannot work with values passed via @JsonCreator.

But with a major rewrite it should be possible to make this work. This probably requires more support for actually determining unwrapped properties a priori, instead of just buffering anything that doesn't match. Or maybe it only requires more logic in handling of the creator.
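
For illustration, a minimal sketch of the combination in question (class and property names are made up); attempting to deserialize, say, {"id":1,"first":"Ada","last":"Lovelace"} into Person fails today:

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonUnwrapped;

// Hypothetical classes for illustration only.
class Name {
    public String first;
    public String last;
}

public class Person {
    private final int id;
    private final Name name;

    // This is the unsupported part: @JsonUnwrapped on a creator parameter.
    @JsonCreator
    public Person(@JsonProperty("id") int id,
                  @JsonUnwrapped Name name) {
        this.id = id;
        this.name = name;
    }
}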

cowtowncoder avatar Dec 02 '16 01:12 cowtowncoder

Any update on this? Would be nice to have a way to get @JsonUnwrapped working for deserialization as well.

Gabology avatar Jun 19 '17 18:06 Gabology

@OverlyExcessive It works just fine with deserialization if you use setters (or fields). Note that you do NOT have to pass ALL properties via creator; mixing and matching creator and setters/fields is legal.

This is not to say it wouldn't be great to allow passing unwrapped properties via Creator methods, just that this is the one aspect with unwrapped properties that does not work.
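
A minimal sketch of that mix-and-match workaround (illustrative names): the regular properties go through the creator, while the unwrapped one is populated through a field after construction.

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonUnwrapped;

class Name {
    public String first;
    public String last;
}

public class Person {
    private final int id;

    // The unwrapped property stays out of the creator and is set via the field.
    @JsonUnwrapped
    public Name name;

    @JsonCreator
    public Person(@JsonProperty("id") int id) {
        this.id = id;
    }
}

Deserializing {"id":1,"first":"Ada","last":"Lovelace"} then works as expected.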

cowtowncoder avatar Jun 19 '17 23:06 cowtowncoder

@cowtowncoder You're correct, thanks!

Gabology avatar Oct 13 '17 09:10 Gabology

+1 Any update on this?

tpavs avatar Mar 21 '18 06:03 tpavs

Implementation of this is necessary to use @JsonUnwrapped in Kotlin.

frost13it avatar Jun 27 '18 12:06 frost13it

Hi! This is my first attempt at contributing to the Jackson code base. I hope you will like my proposal to fix the current issue. It is a work in progress, though. Comments and feedback are welcome.

Analysis of the current code

I am working on a Kotlin project and I really want to be able to have an unwrapped property as a parameter of a creator method, so, in order to fix this, I started to have a look at the code. I must say that I am a bit surprised by the amount of duplicated code in the deserialization process. That's in part because execution paths branch off very early, and we end up using a different method for each combination of annotations.

I made a graph to illustrate what is going on: deserialize.pdf

I think that relying on a more generic approach would help to simplify the code, which would have many benefits, like the ability to easily implement the requested feature. The goal is to have only one main execution path for all the different scenarios.

Proposed solution

@cowtowncoder, you identified the problem pretty well: we obviously cannot wait until the end to construct the unwrapped property, since we sometimes need it in the creator method.

A tree of operations

My proposed approach is to build a tree of the operations that need to be performed in order to successfully deserialize an object of a given class. That tree can then be used to perform the deserialization itself.

Building the tree

  1. Identify the properties that are required in order to instantiate the class to deserialize. For classes where _propertyBasedCreator is true, we need all the properties required to call one of the creator methods. The same goes for other classes, although the list of properties to gather will be empty.
  2. For each property annotated with @JsonUnwrapped, perform the same analysis recursively. After this step is complete, we end up with a hierarchy of required operations. Let's consider these two classes, for example:
public class Location {
  public int x;
  public int y;
  @JsonCreator Location(
    @JsonProperty(value="x", required=true) int x,
    @JsonProperty(value="y", required=true) int y
  ) { /*...*/ }
}

public class Test {
  public String name;
  public Location location;
  @JsonProperty(required=true) public String someOtherThing;
  @JsonProperty public String someOptionalThing;
  @JsonCreator Test(
    @JsonProperty(value="name", required=true) String name,
    @JsonUnwrapped @JsonProperty(value="location", required=true) Location location
  ) { /* ... */ }
}

We would have this hierarchy of required operations to perform:

Instantiate class Test
|-  Call creator function
    |-  Deserialize property name
    |-  Instantiate class Location
        |-  Call creator function
            |-  Deserialize property x
            |-  Deserialize property y

Executing the deserialization

Once the operation tree is built, the following steps are performed (a sketch of the resulting loop follows the list):

  1. Build a map to be able to quickly find a deserialize operation based on the field name.
  2. Check if the root operation is flagged as having been performed. If so, we are done with the instantiation of the class. Continue to step 6.
  3. Read a token from the parser and, based on the field name, feed the value to the appropriate deserialization operation. If no operation can be found, handle the unknown property as usual, possibly buffering it. If the token is an "end of object", signal it to every operation, starting with the leaves of the tree up to the root. If the root operation still has not been performed, raise an error.
  4. Notify the parents of the operation that one of their dependencies has been updated. Continue until the root has been reached.
  5. Go back to step 2.
  6. Read the instantiated object by calling the getResult() method on the root object.
  7. Set the other properties using the buffered tokens.
  8. Set any remaining properties the way it is done right now, possibly through the current vanilla execution path, dealing with the other @JsonUnwrapped properties first.
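
As a rough sketch, the loop above could look like this (DeserializeOperation and the surrounding wiring are hypothetical types used only to illustrate the idea, not existing Jackson API):

import java.io.IOException;
import java.util.Map;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;

// Hypothetical operation-tree node; none of this is existing Jackson API.
interface DeserializeOperation {
    boolean isPerformed();
    void feedValue(JsonParser p) throws IOException; // stores the value in its tree node
    void notifyParents();                            // step 4: propagate updates up to the root
    void signalEndOfObject();                        // end-of-object, leaves first, then the root
    Object getResult();                              // step 6
}

class OperationTreeExecutor {
    Object execute(JsonParser parser, DeserializeOperation root,
                   Map<String, DeserializeOperation> opsByName) throws IOException {
        JsonToken t;
        while (!root.isPerformed() && (t = parser.nextToken()) != null) { // steps 2-5
            if (t == JsonToken.END_OBJECT) {
                root.signalEndOfObject();
                break;
            }
            if (t != JsonToken.FIELD_NAME) {
                continue;
            }
            DeserializeOperation op = opsByName.get(parser.getCurrentName()); // step 1's map
            parser.nextToken(); // advance to the value
            if (op == null) {
                parser.skipChildren(); // unknown property: handle/buffer as usual (elided)
            } else {
                op.feedValue(parser);  // step 3
                op.notifyParents();    // step 4
            }
        }
        return root.getResult(); // step 6; steps 7-8 (buffered/remaining tokens) elided
    }
}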

Error handling

If a requirement is not met, we would be able to generate a hierarchical exception like

Exception: Failed to instantiate class Test
Caused by Exception: Failed to instantiate class Location
Caused by Exception: Failed to deserialize required property "x"
Caused by Exception: Required property "x" is missing

Going one step further

Put all operations in the graph

I suggest that we go one step further down the generalization road and have a required deserialization operation to perform for each required property of the class. Optional properties should also be represented by the same kind of operations. Those optional operations can also have dependencies on required operations; for one thing, the class needs to be instantiated first, right? For the example above, we would have this directed acyclic graph to build: operations.pdf

Rules for building the graph

Here are the rules that would be used to build the graph (a sketch of a graph node follows the list):

  1. Successfully deserializing an object requires instantiating it first.
  2. Each required property needs to be set.
  3. Setting a property of a class requires instantiating the class first.
  4. Instantiating an object requires calling the default constructor or calling a creator function.
  5. Calling a creator function requires collecting the required properties first, either through instantiation or deserialization.
  6. Collecting a property can be done by deserializing it or by instantiating it (if unwrapped) or by calling anySetter (if provided and if we are at the end of the object).
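
To illustrate, a node in such a graph could look something like this (all names are hypothetical, just a sketch of the dependency mechanics):

import java.util.ArrayList;
import java.util.List;

// Hypothetical node type for the operation graph described by the rules above.
abstract class Operation {
    private final List<Operation> dependencies = new ArrayList<>(); // must be performed first
    private final List<Operation> dependents = new ArrayList<>();   // re-checked when this completes
    private boolean performed;

    boolean isPerformed() { return performed; }

    // Wiring examples: a "set property" operation depends on the "instantiate"
    // operation (rule 3); an "instantiate via creator" operation depends on
    // collecting each of its required parameters (rule 5).
    void dependsOn(Operation other) {
        dependencies.add(other);
        other.dependents.add(this);
    }

    // Perform once all dependencies are done, then cascade to dependents;
    // this is what keeps a "set" blocked until its "instantiate" has run.
    void tryPerform() {
        if (performed || !dependencies.stream().allMatch(Operation::isPerformed)) {
            return;
        }
        perform();
        performed = true;
        dependents.forEach(Operation::tryPerform);
    }

    protected abstract void perform();
}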

Execution

Tokens will be fed to the deserialization operations as explained in the first part of this document. One advantage, here, is that we don't need a separate buffer; all the needed deserialization operations are already present in the tree, and that is where the value will be stored while waiting for the class to be instantiated. That is because the set operation will be blocked as long as the instantiate operation it depends on has not been performed.

Performance considerations

For a given class, a template of the graph would have to be built only once. After that, we could copy it quickly when performing each new deserialization.
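
A sketch of what that prototype-and-copy pattern could look like (OperationGraph and its methods are hypothetical):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class OperationGraph {
    static OperationGraph buildTemplateFor(Class<?> type) {
        // analyze annotations/creators and wire up the operation nodes (elided)
        return new OperationGraph();
    }
    OperationGraph copy() {
        // shallow-copy the nodes, giving each a fresh "performed" flag and value slot
        return new OperationGraph();
    }
}

class OperationGraphCache {
    private final Map<Class<?>, OperationGraph> templates = new ConcurrentHashMap<>();

    OperationGraph graphFor(Class<?> type) {
        // build the template once per class, hand out a cheap copy per deserialization
        return templates.computeIfAbsent(type, OperationGraph::buildTemplateFor).copy();
    }
}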

plcarmel avatar Feb 29 '20 05:02 plcarmel

@plcarmel First of all, thank you for investigating this, and for writing up the above analysis.

I agree that the amount of duplication is unfortunate; parts of it exist for optimization purposes, although earlier choices do then lead to a combinatorial explosion. This is further compounded by the separate builder-based deserializer, and even the properties-as-array variant.

Now: on rewrite as suggested... I have 2 main concerns:

  1. Performance. Adding more state can be good for being able to handle more cases, as well as for making processing more understandable. But adding another layer of abstraction for actual execution (I am not concerned about the construction of such an execution plan, fwiw) tends to have non-trivial overhead on the optimal path, both due to the additional abstraction and, probably, the bookkeeping. The latter might have an upside too, of course, wrt handling of "required" properties (which are currently only considered if passed via the constructor).
  2. Fundamental changes tend to add new failure cases in the short term, even if they can pay off in the longer term by reducing complexity and allowing easier fixes.

Of these, (1) is easy enough to measure, although probably only after the implementation is complete enough. It could be that my concerns are overblown. And perhaps it would be possible to avoid constructing the various state objects in cases where more complex processing is not needed.

Regarding (2), I'd be tempted to suggest that if such a rewrite were to be done, it should go with Jackson 3.x, to reduce the likelihood of major issues with 2.x usage and some of its extensions. I am guessing that changes would be needed to internal APIs, potentially breaking some use cases where users rely on sub-classing handlers -- "advanced" usage that exercises extension points that have not necessarily been designed as such (but that in Java are difficult to really mark as "do not extend", for various reasons).

With that, I would be interested in seeing how a prototype could be made. I don't think I have time to work on this directly, but I would do my best to help, read code and so on.

ps. Something that just occurred to me is that one area where I have actually thought about an alternate implementation is non-blocking/async parsing -- jackson-core has such a mode, but databind cannot really use it. Deserialization in non-blocking mode would by necessity require a different approach, feeding tokens in (a push model), to be feasible (IMO). This is different from the streaming level, where a pull-like approach is still feasible (and which is why it has been implemented that way). So one possible idea would be to start prototyping an "alt databind" built using the push approach. If and when it worked well for that side, proving sufficient efficiency (non-blocking parsing itself is within 10% of the blocking one, based on my testing), perhaps a retrofit/rewrite of "standard" input handling would be more practical?

cowtowncoder avatar Mar 01 '20 03:03 cowtowncoder

@cowtowncoder, what you say makes a lot of sense, and I sure don't want to break anything in a module that is used by every Java project on earth, nor slow it down.

I think that building a prototype for the push approach is a good idea, as those two things fit together pretty well.

I still have to figure out the details of how I am going to come up with something that can be tested quickly, and with a reasonable effort.

For future reference, here's the feature request on jackson-core for non-blocking parsing: https://github.com/FasterXML/jackson-core/issues/57

plcarmel avatar Mar 01 '20 06:03 plcarmel

@plcarmel Yes. Also: it goes without saying that any incremental improvements to existing handling, or simplifications, are welcome if you find some. Sometimes, while working on a prototype, other orthogonal improvements are found as well.

Wrt async handling: parsing itself is fully implemented for the streaming core, but the question is how to make databinding work with it -- right now things would not work, since databinding cannot really deal with the JsonToken.NOT_AVAILABLE indicator. In addition, the Smile format (~= binary JSON) implements it, and if there were enough interest, implementations should be quite straightforward for CBOR, and at least possible for most other formats (like Protobuf and Avro).
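
For reference, a minimal sketch of the existing jackson-core (2.9+) non-blocking streaming API that such a push-based databind would sit on top of, showing the NOT_AVAILABLE indicator that databind cannot handle today:

import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.core.async.ByteArrayFeeder;
import java.nio.charset.StandardCharsets;

public class NonBlockingDemo {
    public static void main(String[] args) throws Exception {
        JsonParser p = new JsonFactory().createNonBlockingByteArrayParser();
        ByteArrayFeeder feeder = (ByteArrayFeeder) p.getNonBlockingInputFeeder();

        // Feed a deliberately incomplete document.
        byte[] part1 = "{\"x\":1".getBytes(StandardCharsets.UTF_8);
        feeder.feedInput(part1, 0, part1.length);

        JsonToken t;
        while ((t = p.nextToken()) != JsonToken.NOT_AVAILABLE) {
            System.out.println(t); // START_OBJECT, FIELD_NAME (the int value is not complete yet)
        }

        // Databind has no way to suspend here; a push-based databind would.
        byte[] part2 = "}".getBytes(StandardCharsets.UTF_8);
        feeder.feedInput(part2, 0, part2.length);
        feeder.endOfInput();
        while ((t = p.nextToken()) != null && t != JsonToken.NOT_AVAILABLE) {
            System.out.println(t); // VALUE_NUMBER_INT, END_OBJECT
        }
    }
}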

cowtowncoder avatar Mar 02 '20 02:03 cowtowncoder

I think an easy option to have the deserialization perform well is to optimize the operation graph after it has been created, just like a compiler would do with the graph created from source code.

Large sections of the graph can be replaced with just one node that performs vanilla processing using the current algorithm, for example (provided deserialization is done in blocking mode).

plcarmel avatar Apr 13 '20 20:04 plcarmel

@cowtowncoder

I wrote a prototype, and it works. However, I just realized upon completing it that what I had was an LL parser ... which could have been done more easily and efficiently had I known it from the start.

50 years later, here I come... oh well.

At least I can now formalize my proposal:

For parsing a given class...

  1. Create rules for an LL parser based on the class fields, setters and on Jackson annotations/configuration, etc.

  2. (Optional) Optimize the rule table by replacing a large number of rules with a single rule for those sequences of terminals that can be parsed easily, i.e. with vanilla parsing, for example.

  3. Run the parser

Et voilà!

plcarmel avatar May 14 '20 03:05 plcarmel

Who amongst us has not rewritten an LALR(1) parser at least once in their life! :-D

Actually it has been 25 years since I last read the Dragon Book so I have to re-check which class of parsers LL was.

Looking back now, thank you for the diagrams, btw. Would you mind my reusing them, if I were to write a short Medium blog post about how "standard" Jackson POJO deserialization works? (if I find time to do it) deserialize.pdf in particular is a very useful visualization.

cowtowncoder avatar May 14 '20 22:05 cowtowncoder

@cowtowncoder LL(k) parsers are the simpler and less powerful ones. They start with the S symbol, and then apply substitutions based on the symbol at the top of the stack and the next k tokens.

LL vs LR

More precisely,

LL parsers apply substitutions that expand the number of terminals, then discard them once they are matched. This is mostly what I did with my tree structure. The tree was the fully expanded set of rules and the job of the parser was to remove each and every single node that was not optional.

Optional nodes were the fully expanded, but not yet applied, or committed to, rules. One thing I realized was that my solution would not be able to parse arrays with everything always fully expanded. An array has an indefinite length (unless constrained), so I started to think about how to implement on-the-fly expansions. At this moment, I started to have flashbacks from my compiler class...

Also, with my trees, dependencies were used to make sure the parser was committed to one sub-tree until it was fully collapsed. And that collapsing idea mirrors the LL parser action of removing matched terminals from the stack of symbols, until there is nothing left. My solution was looser because, without specifying the dependencies correctly, the parser would jump between sub-trees, which is undesirable in the context of parsing.

Finally, I had a method in each node that was called canHandleNextToken(), which is basically the look-ahead used to choose between different substitutions.

LR parsers, as far as substitutions are concerned, work the other way around. Instead of predicting the structure, they push terminals on the stack and try to reduce the number of symbols by applying the rules in the opposite direction. They are bottom-up parsers and are more powerful because they have more information available to choose between substitutions.

Example of an LL parse

For this simple class:

class X {
    public int a;
    public String b;
}

Rules

We would generate rules like the ones below.

The syntax:

  • Terminals are lowercase, non-terminals are uppercase.
  • First line of the substitution contains the look-ahead, i.e. the value that is peeked at before deciding if a rule is going to be applied or not.
  • I have put the action to execute when the terminal is matched inside '{ }' brackets.
  • ϵ means "nothing"
S ->

  | look-ahead: *
    ¯¯¯¯¯¯¯¯¯¯¯
    #1          start_object { obj = new X(); },
                CLASS_X_PROPERTY_LIST,
                end_object
          
CLASS_X_PROPERTY_LIST ->

  | look-ahead: end_object
    ¯¯¯¯¯¯¯¯¯¯¯
    #2          ϵ
  | look-ahead: *
    ¯¯¯¯¯¯¯¯¯¯¯
    #3          CLASS_X_PROPERTY,
                CLASS_X_PROPERTY_LIST

CLASS_X_PROPERTY ->

  | look-ahead: field_name[a]
    ¯¯¯¯¯¯¯¯¯¯¯
    #4          field_name[a],
                value_int { obj.a = parser.intValue(); }
  | look-ahead: field_name[b]
    ¯¯¯¯¯¯¯¯¯¯¯
    #5          field_name[b],
                value_string { obj.b = parser.stringValue(); }

In rule 1, from the start symbol, we go directly to the expected structure of X, since we are expecting to deserialize an object from this class, after all.

Execution

Now, let's say we run the parser with the rules above on the input string { "b": "hello world" }. These would be the steps taken.

Start
  stream:  [ start_object, field_name[b], value_string, end_object ]
  symbols: [ S ]
  action performed: { }
  output: { }

Apply rule 1; it is the only one for the S symbol
  stream:  [ start_object, field_name[b], value_string, end_object ]
  symbols: [ start_object, CLASS_X_PROPERTY_LIST, end_object ]
  action performed: { }
  output: { }

Match terminal start_object
  stream:  [ field_name[b], value_string, end_object ]
  symbols: [ CLASS_X_PROPERTY_LIST, end_object ]
  action performed: { output.obj = new X() }
  output: { obj: X }

Apply rule 3; rule 2 cannot be applied because the look-ahead is not end_object
  stream:  [ field_name[b], value_string, end_object ]
  symbols: [ CLASS_X_PROPERTY, CLASS_X_PROPERTY_LIST, end_object ]
  action performed: { }
  output: { obj: X }

Apply rule 5; it is the one for field_name[b] (and symbol CLASS_X_PROPERTY)
  stream:  [ field_name[b], value_string, end_object ]
  symbols: [ field_name[b], value_string, CLASS_X_PROPERTY_LIST, end_object ]
  action performed: { }
  output: { obj: X }

Match terminal field_name[b]
  stream:  [ value_string, end_object ]
  symbols: [ value_string, CLASS_X_PROPERTY_LIST, end_object ]
  action performed: { }
  output: { obj: X }

Match terminal value_string
  stream:  [ end_object ]
  symbols: [ CLASS_X_PROPERTY_LIST, end_object ]
  action performed: { obj.b = parser.getText() }
  output: { obj: X }

Apply rule 2
  stream:  [ end_object ]
  symbols: [ end_object ]
  action performed: { }
  output: { obj: X }

Match terminal end_object
  stream:  [ ]
  symbols: [ ]
  action performed: { }
  output: { obj: X }
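
The trace above is what a standard table-driven LL(1) driver produces; a minimal sketch of such a driver follows (all types here are illustrative, not existing Jackson API):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative types only.
interface Token {}
interface TokenStream { Token next(); Token peek(); }
interface Rule { List<Symbol> rightHandSide(); } // an ϵ rule returns an empty list
class Output { Object obj; }
interface Symbol {
    boolean isTerminal();
    boolean matches(Token t);
    void runAction(Token t, Output out); // e.g. { obj = new X(); } on start_object
    Rule selectRule(Token lookAhead);    // pick a substitution by look-ahead
}

class LlDriver {
    Object parse(TokenStream stream, Symbol start) {
        Deque<Symbol> symbols = new ArrayDeque<>(); // the "symbols:" column above
        symbols.push(start);
        Output out = new Output();
        while (!symbols.isEmpty()) {
            Symbol top = symbols.pop();
            if (top.isTerminal()) {
                Token t = stream.next(); // the "Match terminal ..." steps
                if (!top.matches(t)) throw new IllegalStateException("unexpected token: " + t);
                top.runAction(t, out);
            } else {
                // The "Apply rule ..." steps: expand using the look-ahead, pushing
                // the right-hand side so that its first symbol ends up on top.
                List<Symbol> rhs = top.selectRule(stream.peek()).rightHandSide();
                for (int i = rhs.size() - 1; i >= 0; i--) {
                    symbols.push(rhs.get(i));
                }
            }
        }
        return out.obj;
    }
}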

Notes

We would have to add a few conveniences to our parser implementation, like the possibility to specify that a rule should be executed only once, to avoid having multiple values for the same property. It does not add more power to the grammar, but without that feature we would need a number of rules on the order of n!, where n is the number of properties.

plcarmel avatar May 15 '20 04:05 plcarmel

@cowtowncoder, no problem, you can reuse the diagrams.

Also, my prototype is here: git@github.com:plcarmel/jackson-databind-issue-1467-poc.git

In the tests, there is a function to generate the diagrams for the steps taken. They look like this:

[five step-by-step execution diagrams: graph1 through graph5]

plcarmel avatar May 15 '20 04:05 plcarmel

Hi all, is there any update on the whole topic? I am currently struggling with Kotlin data classes and @JsonUnwrapped annotations, since data classes implicitly use @JsonCreator for the constructor.

OT: Nice proposal, @plcarmel; haven't seen something like that in a while.

Lacritz avatar Jan 21 '21 10:01 Lacritz

Thank you @Lacritz. Unfortunately, I won't have time to work on this anytime soon.

plcarmel avatar Jan 21 '21 23:01 plcarmel

Found a hack solution for Kotlin delegates; maybe it will be useful to someone.

Just add val to the delegating parameter and, for smaller JSON, ignore duplicate properties:

import com.fasterxml.jackson.annotation.JsonIgnore
import com.fasterxml.jackson.databind.DeserializationFeature
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.kotlin.registerKotlinModule
import org.junit.jupiter.api.Test // JUnit 5 assumed

data class Report(
    val name: String,
    val params: DefaultParamsReport
) : DefaultParams by params

interface DefaultParams {
    @get:JsonIgnore
    val count: Long
}

data class DefaultParamsReport(
    @get:JsonIgnore(false)
    override val count: Long
) : DefaultParams

class Deserialize {
    @Test
    fun tryDeserializing() {
        val mapper = ObjectMapper().registerKotlinModule()
        mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false)

        val report = Report(
            "big-report",
            params = DefaultParamsReport(
                count = 1
            )
        )

        val str = mapper.writeValueAsString(report)
        println(str)
        val result = mapper.readValue(str, Report::class.java)
        println(result)
    }
}

Arsennikum avatar Apr 12 '22 11:04 Arsennikum

Just noticed that this isn't supported after spending a considerable amount of time on writing these kinds of data structures 😄

I created a PR (https://github.com/FasterXML/jackson-databind/pull/4271) which (at least) has a working test case that deserializes objects with @JsonUnwrapped in their creators. It's by no means anything close to the cleaner solutions above, but if those are planned for 3.x, perhaps this approach could be a starting point for solving the problem for 2.x.

fxshlein avatar Dec 20 '23 01:12 fxshlein