spring-data-relational
Vastly refactored property --> JDBC value mapping API
Hello @schauder!
I have finally come up with a draft solution for issue #1136. All test cases pass, and the mapping is now far clearer, at least from my perspective. Below is a guide to how the mapping from property --> `JdbcValue` works now. You should focus your attention on the
`createJdbcValue(@Nullable Object value, @Nullable Class<?> genericValueType, @NonNull Class<?> originalValueType)`
method. This is where the new logic is implemented. Here is how it works:
- If the provided value is `null`, then we just return `null` wrapped in a `JdbcValue` whose `SQLType` corresponds to the value's original type. Here is an interesting detail: the `java.sql.JDBCType` class has a `NULL` type, and almost all databases accept it, except DB2, which wants the original type of the value to be present here. That is why, unfortunately, we have to pass `originalValueType` as a parameter.
- If it is not null, then we immediately apply the conversions defined by the user or by us. This is important: if the user has defined a conversion from OffsetDateTime --> Timestamp, then he/she would expect that conversion to be applied for a field like `Timestamp createdAt`, which was not the case before, since we converted the value ourselves first. I think this is very important for the related issues #1089 and #1127.
- If the value resulting from the conversion is `null` (and it was not `null` before; we checked that previously), then we understand that either our own or the user's conversion explicitly returned `null`, so this is the result that we, or the user, want, and we return `null` wrapped in a `JdbcValue` with the `SQLType` of the original type of the value (because of DB2 again).
- If the converted value is of type `JdbcValue` (the original value can be of that type as well), we just assume, as was the case in the previous code, that such a value is the final result. So if the user, or we inside the framework, return a `JdbcValue` as the result of a conversion, then no further logic is applied: we just return the resulting `JdbcValue`.
- If the converted, or the original, value is of type `AggregateReference`, then we recursively try to create a `JdbcValue` for `AggregateReference#id`. I borrowed that logic from the `writeValue` method, since it seems to be correct.
- Then we need to determine whether we applied a conversion or not. This part I think I should explain in detail. If we applied a conversion and the result has a generic type, it is not possible to deduce that generic type at runtime, simply due to type erasure in Java, since the variable is local. However, if no conversion was applied and there originally was a generic type, then the caller provides us with that initial generic type from the outside, which gives us a more precise `JDBCType` at runtime. Otherwise we just end up with `JDBCType.UNKNOWN`. For example, both now and prior to my changes, if a user creates a converter that converts some value to `Set<String>` (I would say this is a very rare case, since we do not even have a test for this scenario in the project), then the framework will try to create a SQL ARRAY with `typeName` as `UNKNOWN`, using this JDBC API: `Array createArrayOf(String typeName, Object[] elements)`. Is that a problem? Yes, it is. Some JDBC drivers accept this as an array type, but some do not. So here we are limited by Java's restrictions, at least for now... That is why I decided to pass the original generic type into the method (if applicable for the given value, of course). The goal is to at least cover the case when no converter was applied, which is by far the most common scenario. The whole decision flow is sketched in code right after this list.
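To make the branching above easier to follow, here is a minimal, self-contained sketch of the decision flow. This is not the PR's actual code: `applyConversions` and `createFromPlainValue` are hypothetical stand-ins for the registered conversions and for the collection/array tail (sketched further below), and it assumes the `JdbcValue.of(Object, SQLType)` factory and the `JdbcUtil.targetSqlTypeFor(Class<?>)` helper as found in recent spring-data-jdbc versions.

```java
import org.springframework.data.jdbc.core.convert.JdbcValue;
import org.springframework.data.jdbc.core.mapping.AggregateReference;
import org.springframework.data.jdbc.support.JdbcUtil;
import org.springframework.lang.NonNull;
import org.springframework.lang.Nullable;

class JdbcValueMappingSketch {

	JdbcValue createJdbcValue(@Nullable Object value, @Nullable Class<?> genericValueType,
			@NonNull Class<?> originalValueType) {

		// 1. null in, null out - but carry the original type, because DB2
		// rejects a plain NULL SQL type.
		if (value == null) {
			return JdbcValue.of(null, JdbcUtil.targetSqlTypeFor(originalValueType));
		}

		// 2. User/framework conversions run first, so a user-defined
		// OffsetDateTime -> Timestamp converter always wins.
		Object converted = applyConversions(value);

		// 3. A conversion may legitimately map a non-null value to null.
		if (converted == null) {
			return JdbcValue.of(null, JdbcUtil.targetSqlTypeFor(originalValueType));
		}

		// 4. A JdbcValue coming out of a conversion is final - return it as is.
		if (converted instanceof JdbcValue) {
			return (JdbcValue) converted;
		}

		// 5. Aggregate references are unwrapped and their id mapped recursively.
		if (converted instanceof AggregateReference) {
			Object id = ((AggregateReference<?, ?>) converted).getId();
			return createJdbcValue(id, null, id == null ? Object.class : id.getClass());
		}

		// 6. The caller-supplied generic type is only trustworthy when no
		// conversion ran; after a conversion, type erasure hides the result's
		// element type and we fall back to an UNKNOWN SQL type.
		boolean conversionApplied = converted != value;
		return createFromPlainValue(converted, conversionApplied ? null : genericValueType);
	}

	// Hypothetical stand-in for the registered CustomConversions.
	@Nullable
	private Object applyConversions(Object value) {
		return value;
	}

	// Hypothetical stand-in for the collection/array/simple-value tail,
	// sketched further below.
	private JdbcValue createFromPlainValue(Object value, @Nullable Class<?> elementType) {
		return JdbcValue.of(value, JdbcUtil.targetSqlTypeFor(value.getClass()));
	}
}
```

For the converter-first point, the kind of user converter meant above looks like this (standard Spring Data converter API):

```java
import java.sql.Timestamp;
import java.time.OffsetDateTime;

import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.WritingConverter;

@WritingConverter
class OffsetDateTimeToTimestampConverter implements Converter<OffsetDateTime, Timestamp> {

	@Override
	public Timestamp convert(OffsetDateTime source) {
		// With the refactored order, this converter runs before any
		// framework-internal conversion of OffsetDateTime.
		return Timestamp.from(source.toInstant());
	}
}
```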
The rest is almost the same as it was: if the value is a collection, we convert it into an array of the most precise type we can determine; if the value is an array, we create a SQL ARRAY value from it, as we used to; and if the value is simple, I use the `JdbcUtil` class, as the previous code did as well. A sketch of this tail follows below.
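A minimal sketch of that tail, under the same assumptions as above (`JdbcUtil.targetSqlTypeFor` for the type lookup; a `Connection` parameter is added here because `createArrayOf` lives on the driver connection, and primitive arrays would additionally need boxing, which is omitted):

```java
import java.sql.Connection;
import java.sql.JDBCType;
import java.sql.SQLException;
import java.util.Collection;

import org.springframework.data.jdbc.core.convert.JdbcValue;
import org.springframework.data.jdbc.support.JdbcUtil;
import org.springframework.lang.Nullable;

class PlainValueTailSketch {

	JdbcValue createFromPlainValue(Connection connection, Object value, @Nullable Class<?> elementType)
			throws SQLException {

		// Collections are materialized as an array first.
		if (value instanceof Collection) {
			Object[] elements = ((Collection<?>) value).toArray();
			// Without a known element type the driver only gets "UNKNOWN" as
			// the type name, which some JDBC drivers reject.
			String typeName = elementType != null
					? JdbcUtil.targetSqlTypeFor(elementType).getName()
					: "UNKNOWN";
			return JdbcValue.of(connection.createArrayOf(typeName, elements), JDBCType.ARRAY);
		}

		// Plain object arrays: the component type survives erasure.
		if (value instanceof Object[]) {
			String typeName = JdbcUtil.targetSqlTypeFor(value.getClass().getComponentType()).getName();
			return JdbcValue.of(connection.createArrayOf(typeName, (Object[]) value), JDBCType.ARRAY);
		}

		// Simple values: let JdbcUtil pick the matching SQL type.
		return JdbcValue.of(value, JdbcUtil.targetSqlTypeFor(value.getClass()));
	}
}
```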
Please let me know what you think. I am sure there is a lot to discuss, but we need this refactoring, at least for our own understanding...
@schauder I hope you will get a chance to look at it. This PR would allow us to proceed on the other related issues I mentioned, and allow other contributors to build on the already refactored code as a baseline.
Hi Jens @schauder! Are you going to review it soon?
@schauder @patricklucas Will this PR be considered?
We are currently in the process of refactoring our converters with #1618. The recent changes have cleaned up our code by unifying several approaches into a single one. Looking forward, we want to investigate a conversion process that first transforms all values into `RowDocument` and derives statement values (insert/update) from there.
Do you want to revisit your pull request once #1618 is merged?
@mp911de Yeah, sure, I will