Selecting and reading composite types?
This could be an issue, but at the moment it's just a question. If I have a type in the DB:
create type foo as (a int, b text, c timestamp);
And then when I read a foo value in a select statement, it seems none of the Java/Postgres APIs out there offer anything beyond string parsing. Is there no way to do something like:
var obj = rowReader.get(row, ...)
obj.getInt(1)
obj.getString(2)
obj.getLocalDateTime(3)
I don't know all the rules of the string format. Maybe it's simple, but custom types can nest and strings may need escaping, so it seems like it would get ugly.
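For what it's worth, the text format is documented but does have a few traps. Here's a rough, self-contained sketch (not this library's code, just an illustration) of splitting a composite literal like `(1,"a, ""b""",2021-01-01 00:00:00)` into raw field strings, covering the main rules: comma-separated fields inside parentheses, double-quoted fields with `""` and backslash escapes, and an empty unquoted field meaning NULL. Nested composites just arrive as quoted strings, so you'd recurse on those.

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of splitting a Postgres composite literal into raw field strings.
 *  Rules assumed from the documented row text format; not exhaustive. */
public class CompositeParser {
  public static List<String> parse(String s) {
    if (s.length() < 2 || s.charAt(0) != '(' || s.charAt(s.length() - 1) != ')')
      throw new IllegalArgumentException("Not a composite literal: " + s);
    List<String> fields = new ArrayList<>();
    StringBuilder cur = new StringBuilder();
    boolean quoted = false;   // currently inside double quotes
    boolean sawQuote = false; // field was quoted, so empty means "" not NULL
    for (int i = 1; i < s.length() - 1; i++) {
      char c = s.charAt(i);
      if (quoted) {
        if (c == '\\') {
          cur.append(s.charAt(++i)); // backslash escape
        } else if (c == '"') {
          if (i + 1 < s.length() - 1 && s.charAt(i + 1) == '"') {
            cur.append('"'); i++;    // doubled quote -> literal quote
          } else {
            quoted = false;          // closing quote
          }
        } else cur.append(c);
      } else if (c == '"') {
        quoted = true; sawQuote = true;
      } else if (c == ',') {
        fields.add(finish(cur, sawQuote));
        cur.setLength(0); sawQuote = false;
      } else cur.append(c);
    }
    fields.add(finish(cur, sawQuote));
    return fields;
  }
  // An empty unquoted field represents SQL NULL
  private static String finish(StringBuilder cur, boolean sawQuote) {
    return (!sawQuote && cur.length() == 0) ? null : cur.toString();
  }
}
```

So it's not terrible, but the quoting/NULL rules are easy to get subtly wrong, which is probably why most drivers just hand back the raw string.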
Yeah, I don't expose much as public in RowReader, but since extensibility was a goal of this project, I do expose some things as protected that can help here (protected items are hidden from the Javadoc). Look at how arrays and hstores are read in there. You can extend RowReader to expose what you want, and then implement Converters.To to do essentially what those array/hstore calls are doing.
I would also accept a simple PR that added a new DataType.Composite class holding a String[] of values (along with an accompanying converter and tests). Otherwise, I don't think I'll get to it any time soon.
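To make the ask concrete, here's a minimal sketch of the kind of holder class I mean: raw field strings plus typed accessors. All names and signatures here are illustrative, not the library's actual API, and the converter wiring is omitted.

```java
import java.time.LocalDateTime;

/** Illustrative DataType.Composite-style holder: raw per-field text plus
 *  typed accessors. Indexes are 1-based to mirror JDBC conventions. */
public class Composite {
  private final String[] values; // raw text of each field; null means SQL NULL

  public Composite(String[] values) { this.values = values; }

  public String getString(int i) { return values[i - 1]; }

  public Integer getInt(int i) {
    String v = values[i - 1];
    return v == null ? null : Integer.valueOf(v);
  }

  public LocalDateTime getLocalDateTime(int i) {
    String v = values[i - 1];
    // Postgres text timestamps use a space separator, e.g. 2021-01-01 00:00:00,
    // while LocalDateTime.parse expects ISO-8601's 'T'
    return v == null ? null : LocalDateTime.parse(v.replace(' ', 'T'));
  }
}
```

Conversion stays lazy this way: fields are only parsed into richer types when an accessor is called, and the raw strings remain available for anything the converters don't cover.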
I've added a calendar reminder for myself to maybe do this later.
This is off-topic, but I don't see a discussion group or anything, so I'll ask it here... I may end up using JDBC because the API is simpler, and it's not clear to me how I'd get more performance from an async library, considering the PG server doesn't accept that many parallel connections and I'm using a connection pool. What do you think? Given a pool of limited connections, how does this library achieve better performance than plain JDBC with operations put in a concurrent queue so frontend threads don't block?
It might not achieve better performance. But in addition to possibly being faster (you'd have to benchmark your use case), it also exposes some features at a low level, making it more powerful. I can say I've personally seen much better performance than JDBC because the rest of my code was already in an async pipeline.