Custom RowMapper through bean definition not registering
I have multiple domain aggregates using default row mappers. However, I have one that needs a custom row mapper. When I attempt to register and use a custom row mapper for this particular aggregate through a bean definition of QueryMappingConfiguration, the custom row mapper is not used; queries still fall back to the default entity row mapper. I created a row mapper class:
import java.sql.ResultSet;
import java.sql.SQLException;

import org.springframework.jdbc.core.RowMapper;

public class TestObjectRowMapper implements RowMapper<TestObject> {

    @Override
    public TestObject mapRow(ResultSet rs, int rowNum) throws SQLException {
        TestObject testObject = new TestObject();
        ...
        return testObject;
    }
}
and registered it in a bean of type QueryMappingConfiguration:
@Bean
QueryMappingConfiguration rowMappers() {
    return new DefaultQueryMappingConfiguration()
            .registerRowMapper(TestObject.class, new TestObjectRowMapper());
}
According to the documentation, this should be a valid way of registering a row mapper. I want to avoid using the @Query annotation, because I would then have to override all of the existing findAll and findById methods in order to use my row mapper for those methods. Either I am missing something, or there is a bug in bean registration for row mappers.
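For reference, the workaround I want to avoid would look roughly like this; the table name, query, and Long id type are illustrative:

import org.springframework.data.jdbc.repository.query.Query;
import org.springframework.data.repository.CrudRepository;

interface TestObjectRepository extends CrudRepository<TestObject, Long> {

    // Every inherited method would have to be redeclared like this
    // just to point it at the custom row mapper.
    @Override
    @Query(value = "SELECT * FROM test_object", rowMapperClass = TestObjectRowMapper.class)
    Iterable<TestObject> findAll();
}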
Ideally, custom row mappers should work with methods inherited from CrudRepository/PagingAndSortingRepository as well as with custom methods. Thanks in advance.
QueryMappingConfiguration is intended for custom queries only.
Interesting idea to apply it to the predefined methods as well.
Could you explain why you need a custom RowMapper for the aggregate?
So we can come up with an alternative solution or better understand why this would be a valuable feature.
When you can use the standard query, I would expect that you could use a custom constructor instead.
If there are instances where an aggregate has columns in the database that are complex, such as a JSON document that represents another object, my options would be:
- have the aggregate represent that column as a String which holds the JSON. A getter and setter can then be applied to deserialize and serialize that JSON to and from its type (sketched just below this list).
- have the aggregate represent that column as the actual object type that it is. But this would require a custom row mapper in order to take the column input and map it appropriately before creating the aggregate.
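To make option 1 concrete, here is a minimal sketch assuming Jackson is used for the (de)serialization; the Details type is made up:

import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class TestObject {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Persisted as plain JSON text in a VARCHAR column.
    private String detailsJson;

    // Details is a made-up domain type standing in for the complex object.
    public Details getDetails() {
        try {
            return MAPPER.readValue(detailsJson, Details.class);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Could not deserialize details", e);
        }
    }

    public void setDetails(Details details) {
        try {
            this.detailsJson = MAPPER.writeValueAsString(details);
        } catch (JsonProcessingException e) {
            throw new IllegalStateException("Could not serialize details", e);
        }
    }
}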
If I were to choose option 2, I would want the row mapper to also be able to be applied to default methods such as findAll, findById, etc.
Can't you do the same with a custom conversion for the column/type?
As I understand it (correct me if I am wrong), if I use reading/writing converters, they would apply to every field with that type in the database, which I do not want. For example: if I had a reading converter that maps a String to a Person object, wouldn't that try to map every varchar in the database to a Person object? I do not see anywhere in the documentation that specifies a way to register a converter only for a specific table and column.
This is handled by @ReadingConverter and @WritingConverter.
Let's say you have a @WritingConverter from Person to String and a @ReadingConverter for the inverse conversion.
The writing converter would be applied to every Person in your domain model, and thereby a Person could be stored in a VARCHAR field.
The reading converter would be used for values of type String that, when loaded from the database, belong to a property of type Person. Other properties that are already of type String, or of some completely different type (WaterBottle), would be completely unaffected.
In many cases this should do the trick. If you have Person instances in your model that should get converted to a String and other Person instances you want to store as Numbers, you would be out of luck right now.
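Sketched out, such a converter pair might look like this; the Person fields and the serialization format are just placeholders:

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;

// Minimal stand-in for the Person type from the example above.
class Person {
    final String firstName;
    final String lastName;

    Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }
}

@WritingConverter
class PersonToStringConverter implements Converter<Person, String> {
    @Override
    public String convert(Person person) {
        // Placeholder serialization; a real project might use JSON here.
        return person.firstName + "," + person.lastName;
    }
}

@ReadingConverter
class StringToPersonConverter implements Converter<String, Person> {
    @Override
    public Person convert(String source) {
        String[] parts = source.split(",");
        return new Person(parts[0], parts[1]);
    }
}

Registered as custom conversions (for example via a JdbcCustomConversions bean), these apply to every Person property, exactly as described above.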
Does that solve your problem?
Yes, I believe that would solve my immediate problem. However, I still think it would make sense that if someone registers a custom row mapper, which applies to all custom repository methods, it should also apply to all of the default repository methods for that aggregate. Otherwise, what is the point in having some methods that use that row mapper and some that do not? If a row mapper is defined for an aggregate, it seems like it should be assumed that the row mapper will be used for all methods for that aggregate.
Also, an example that I can think of that could be practical: say I have an enum that is stored in the DB as its string representation in one table, but as its ordinal integer value in another table. A converter would not work in this case, and my only options would be to use a row mapper or to define custom getters and setters.
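A contrived sketch of what I mean (the types are made up):

// The same enum needs two different column representations here, so a single
// global reading/writing converter pair cannot cover both aggregates.
enum Status { OPEN, CLOSED }

class Ticket {
    Status status;   // stored as VARCHAR: "OPEN" / "CLOSED"
}

class AuditEntry {
    Status status;   // stored as INT: 0 / 1
}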
Also, do converters work with Lists? For example, if I have a List<Person> as a type, it would be great for the reading/writing converters to work with that type as well.
When we created the option to use row mappers with @Query, it was intended for specialised queries that are not part of the CRUD operations. Using row mappers for the standard read operations would also raise the question of what to do with write operations.
Anyway, we'll discuss this internally.
> Also, do converters work with Lists? For example, if I have a List<Person> as a type, it would be great for the reading/writing converters to work with that type as well.
No, they currently don't work with collections, which are handled by special code. But changing that would improve things considerably. Could you please create a separate issue for that?
Sure thing. Thanks for all of the quick responses!
We finally discussed this internally and decided that supporting RowMapper registration in standard methods and part-tree methods would be a valid improvement.
If you are interested, a PR would be welcome.
Hello @schauder, I'm currently trying to implement this feature. I still have some questions that I would love to hear your opinion on.
As described above, QueryMappingConfiguration can be used to register RowMapper instances for custom repository queries. When extending this idea to the standard methods defined in SimpleJdbcRepository, I'm running into some problems. SimpleJdbcRepository effectively uses an implementation of DataAccessStrategy to generate and execute the queries. To override the default EntityRowMapper used in DefaultDataAccessStrategy, I would have to inject QueryMappingConfiguration into this class. This, however, causes a cyclic dependency between org.springframework.data.jdbc.core and org.springframework.data.jdbc.repository.
To solve this, I thought of doing one of the following things:
- Move QueryMappingConfiguration to org.springframework.data.jdbc.core. This is probably not wanted.
- Introduce a new abstraction, for example RowMapperRegistration, in org.springframework.data.jdbc.core. By default, this class contains an EntityRowMapper but can be supplemented by additional RowMappers defined through QueryMappingConfiguration (see the sketch after this list).
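Roughly, I imagine the second option looking something like this; the name and shape are only a proposal, not an existing API:

import java.util.HashMap;
import java.util.Map;

import org.springframework.jdbc.core.RowMapper;

// Proposed abstraction: resolves the RowMapper for an aggregate type and
// falls back to a default mapper (e.g. the EntityRowMapper) when nothing
// was registered for that type.
public class RowMapperRegistration {

    private final Map<Class<?>, RowMapper<?>> mappers = new HashMap<>();

    public <T> void register(Class<T> type, RowMapper<? extends T> mapper) {
        mappers.put(type, mapper);
    }

    @SuppressWarnings("unchecked")
    public <T> RowMapper<? extends T> rowMapperFor(Class<T> type, RowMapper<? extends T> defaultMapper) {
        RowMapper<?> mapper = mappers.get(type);
        return mapper != null ? (RowMapper<? extends T>) mapper : defaultMapper;
    }
}

DefaultDataAccessStrategy could then depend on this class without reaching into the repository package.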
Thanks in advance for your time and input!
I agree with you that the second option sounds better.