vertx-sql-client
Custom PostgreSQL type mapping
PostgreSQL has fixed type OID assignments for all built-in types. Extensions add custom types that do not have a fixed OID; their OIDs can vary from one database instance to another.
We need to support such types and have the ability to provide a dynamic mapping so that custom types can be correctly encoded/decoded.
So basically we can:
- start by letting the user provide an OID <-> Codec table to the driver (easy to do?)
- add a utility to build such a table using SQL (see the sketch below)
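A minimal sketch of what such a utility could look like, assuming a Vert.x 4.x `SqlClient` named `client`; the catalog query, the map, and the handler are illustrative, not an existing API:

```java
// Illustrative only: query the catalog for non-built-in types and build an
// OID -> type name map that a pluggable codec registry could be keyed on.
client.query(
    "SELECT t.oid::int8 AS oid, t.typname FROM pg_type t " +
    "JOIN pg_namespace n ON n.oid = t.typnamespace " +
    "WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')")
  .execute(ar -> {
    if (ar.succeeded()) {
      Map<Long, String> oidToTypeName = new HashMap<>();
      for (Row row : ar.result()) {
        oidToTypeName.put(row.getLong("oid"), row.getString("typname"));
      }
      // a codec could then be looked up by OID when encoding/decoding values
    }
  });
```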
@vietj https://github.com/impossibl/pgjdbc-ng is a JDBC 4.2-compliant driver built on an asynchronous Netty core; I am the author.
One of the central tenets of the driver is to use pluggable codecs in the client that match the inverse of those in the server (see https://github.com/impossibl/pgjdbc-ng/tree/develop/driver/src/main/java/com/impossibl/postgres/system/procs). I'm wondering if/how we could transfer this existing code (which is built, tested, and used in quite a bit of production) to solve the problem outlined here.
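To illustrate the idea of codecs keyed by OID that mirror the server's encode/decode procedures, here is a purely hypothetical interface sketch; this is not pgjdbc-ng's actual API:

```java
import io.netty.buffer.ByteBuf;

// Hypothetical illustration only; pgjdbc-ng's real codec classes live under
// com.impossibl.postgres.system.procs and have a different shape.
interface PgTypeCodec<T> {
  int oid();                                 // server-assigned OID handled by this codec
  T decodeBinary(ByteBuf in);                // server binary wire format -> Java value
  void encodeBinary(T value, ByteBuf out);   // Java value -> server binary wire format
}
```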
Even as the driver maintainer, I am quite interested in moving to some version of a reactive driver that has all the same capabilities as the JDBC implementations.
FYI, the reason I bring this to your attention is that you specifically talked about mapping OIDs to codecs, and that's what pgjdbc-ng does.
@kdubb that sounds interesting, can you elaborate on how the mapping works?
@vietj We tried to execute the large object server-side functions using a prepared statement and values from a Tuple.
But this failed because the OID type (and also VOID?!?) is not mapped in io.vertx.pgclient.impl.codec.DataTypeCodec, with two DEBUG logs:
- "Data type OID does not support binary encoding"
- "Data type VOID does not support binary encoding"
These two types should be added to the codec.
Can you provide a reproducer to help us? @dprincethg
Sorry, I cannot copy the code here (secured environment).
In short, for Large Object PUT functions
lo_put ( loid oid, offset bigint, data bytea ) → void
Writes data starting at the given offset within the large object; the large object is enlarged if necessary.
lo_put(24528, 1, '\xaa') →
The code looks like:
```java
sqlConnection.prepare("SELECT lo_put($1, $2, $3);", ar -> {
  if (ar.succeeded()) {
    PreparedStatement preparedStatement = ar.result();
    preparedStatement.query()
      .execute(Tuple.tuple().addLong(largeObjectOid).addLong(offset).addBuffer(buffer), ar2 -> {
        // ...
      });
  }
});
```
As a workaround, we use the jdbc.postgresql Large Object API inside executeBlocking to manage large objects.
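A rough sketch of that workaround, assuming a blocking JDBC `java.sql.Connection` named `jdbcConn`, a Vert.x instance `vertx`, and the payload in a Vert.x `Buffer` named `buffer` (names are illustrative):

```java
vertx.executeBlocking(promise -> {
  try {
    jdbcConn.setAutoCommit(false); // Large Object calls must run inside a transaction
    LargeObjectManager lobj =
        ((org.postgresql.PGConnection) jdbcConn).getLargeObjectAPI();
    long oid = lobj.createLO(LargeObjectManager.READ | LargeObjectManager.WRITE);
    LargeObject obj = lobj.open(oid, LargeObjectManager.WRITE);
    obj.write(buffer.getBytes());
    obj.close();
    jdbcConn.commit();
    promise.complete(oid);
  } catch (Exception e) {
    promise.fail(e);
  }
}, res -> {
  // back on the event loop with the new large object OID, or a failure
});
```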
We don't use a "simple" query, as we want to avoid building the SQL string with the byte array (buffer) encoded as a hex string (\xaa...), because of the performance impact.
It is related to another issue about "Large Object support" https://github.com/eclipse-vertx/vertx-sql-client/issues/46.
I think a dedicated API should be provided to manage Large Objects, like jdbc.postgresql does, but in an async way of course ;)
BTW,
As an alternative to the jdbc.postgresql Large Object API, the PostgreSQL documentation suggests this approach:
```java
File file = new File("myimage.gif");
FileInputStream fis = new FileInputStream(file);
PreparedStatement ps = conn.prepareStatement("INSERT INTO images VALUES (?, ?)");
ps.setString(1, file.getName());
ps.setBinaryStream(2, fis, file.length());
ps.executeUpdate();
ps.close();
fis.close();
```
for table:
CREATE TABLE images (imgname text, imgoid oid);
As this alternative relies on a simple "INSERT" query (but with a Java InputStream as a query parameter), the Vert.x PostgreSQL client should be able to handle it.
However, in the Vert.x Tuple there is no way to add a ReadStream<Buffer> (we can't use the addBuffer() method, as we have to handle a ReadStream).
@vietj Any help?
You can buffer the stream into a Buffer first, I think.
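A minimal sketch of that suggestion, assuming the upload is exposed as a `ReadStream<Buffer>` named `stream` and a `fileName` string (both illustrative); this only works when the content fits in memory, which is exactly the limitation raised below:

```java
Buffer accumulated = Buffer.buffer();
stream.handler(accumulated::appendBuffer);   // accumulate each chunk in memory
stream.endHandler(v -> {
  // once fully buffered, the content can be bound with Tuple#addBuffer
  Tuple params = Tuple.tuple().addString(fileName).addBuffer(accumulated);
  // ... execute the prepared INSERT with `params`
});
```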
No, the stream is too large to be buffered. The purpose here is to stream large uploaded file content into a DB BLOB (a Large Object in PostgreSQL terminology).
I see, we don't currently support blobs for Postgres.