
Experiencing "[0A000] cached plan must not change result type" error with PostgreSQL

chancekim opened this issue 3 years ago • 5 comments

While using prepared statements with PostgreSQL, we are getting "[0A000] cached plan must not change result type" errors.

You can reproduce this error with the sequence below; a rough code sketch follows the list.

  1. Create a table for the test (let's assume this table's name is A).
  2. Create a PostgreSQL connection.
  3. Execute sendPreparedStatement("SELECT * FROM A") with the connection made in step 2.
  4. Execute sendQuery("ALTER TABLE A ADD COLUMN added_column VARCHAR").
  5. Execute sendPreparedStatement("SELECT * FROM A") again.
  6. The error occurs.
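
A minimal sketch of those steps, assuming a local PostgreSQL instance; the host, credentials, and database name are placeholders, and the single-argument PostgreSQLConnection constructor relies on its default registries:

```kotlin
import com.github.jasync.sql.db.Configuration
import com.github.jasync.sql.db.postgresql.PostgreSQLConnection

fun main() {
    // Step 2: create a PostgreSQL connection; connection details are placeholders.
    val connection = PostgreSQLConnection(
        Configuration(
            username = "test",
            host = "localhost",
            port = 5432,
            password = "test",
            database = "testdb"
        )
    )
    connection.connect().get()

    // Step 3: the prepared statement caches a plan describing A's current columns.
    connection.sendPreparedStatement("SELECT * FROM A").get()

    // Step 4: the schema change invalidates the result type of that cached plan.
    connection.sendQuery("ALTER TABLE A ADD COLUMN added_column VARCHAR").get()

    // Steps 5-6: re-running the same prepared statement fails with
    // "[0A000] cached plan must not change result type".
    connection.sendPreparedStatement("SELECT * FROM A").get()

    connection.disconnect().get()
}
```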

With a single concrete connection, this error is not difficult to handle: we can close the connection and create a new one that has no cached plans. With a connection pool, however, it is harder, because the pool doesn't close the affected connection when this error happens. It would be much more convenient if the connection pool configuration offered a property to handle this situation automatically.
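
A sketch of that single-connection workaround: on a failure, drop the connection (and its cached plans) and retry once on a fresh one. Here newConnection() is a hypothetical factory that builds and connects a PostgreSQLConnection as in the snippet above; CompletableFuture.get() wraps driver errors in ExecutionException.

```kotlin
import com.github.jasync.sql.db.Connection
import com.github.jasync.sql.db.QueryResult
import java.util.concurrent.ExecutionException

// Run a prepared statement; if it fails, close the connection so its cached
// plans are discarded, then retry once on a freshly created connection.
fun queryWithReconnect(
    connection: Connection,
    newConnection: () -> Connection,
    query: String
): Pair<Connection, QueryResult> =
    try {
        connection to connection.sendPreparedStatement(query).get()
    } catch (e: ExecutionException) {
        connection.disconnect().get()
        val fresh = newConnection()
        fresh to fresh.sendPreparedStatement(query).get()
    }
```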

chancekim · Sep 15 '22

Thank you for reporting an issue. See the wiki for documentation and gitter for questions.

github-actions[bot] · Sep 15 '22

Hi, can you specify what type of pool you are using? Is it com.github.jasync.sql.db.pool.ConnectionPool?

Also, can you please specify how you take/return objects? Is it with the use method?

It would be possible to add such an API (the pool actually has something similar for connection test timeout), but I need to understand the use case first.

oshai · Sep 18 '22

Also, if you can provide a complete stacktrace, that would help. I think the connection should be closed on some errors.

oshai · Sep 18 '22

Hi! Yes, I am using com.github.jasync.sql.db.pool.ConnectionPool and taking connection objects with connectionPool.asSuspending() (as far as I know, it uses the use method internally). Here's the stacktrace.

com.github.jasync.sql.db.postgresql.exceptions.GenericDatabaseException: ErrorMessage(fields=[(Severity, ERROR), (V, ERROR), (SQLSTATE, 0A000), (Message, cached plan must not change result type), (File, plancache.c), (Line, 718), (Routine, RevalidateCachedQuery)])
	at com.github.jasync.sql.db.postgresql.PostgreSQLConnection.onError(PostgreSQLConnection.kt:241)
	at com.github.jasync.sql.db.postgresql.codec.PostgreSQLConnectionHandler.channelRead0(PostgreSQLConnectionHandler.kt:177)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311)
	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:432)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.base/java.lang.Thread.run(Thread.java:833)

I hope this helps with investigating the problem :)
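
For reference, roughly how the pool is taken via asSuspending; this is only a sketch, the connection string and query are placeholders, and asSuspending is used here as an extension property:

```kotlin
import com.github.jasync.sql.db.asSuspending
import com.github.jasync.sql.db.postgresql.PostgreSQLConnectionBuilder
import kotlinx.coroutines.runBlocking

fun main(): Unit = runBlocking {
    // Pool creation; the connection string is a placeholder.
    val pool = PostgreSQLConnectionBuilder.createConnectionPool(
        "jdbc:postgresql://localhost:5432/testdb?user=test&password=test"
    )
    // asSuspending wraps the pool's CompletableFuture API in suspending calls;
    // internally the pool still takes a connection, runs the query, and returns it.
    val result = pool.asSuspending.sendPreparedStatement("SELECT * FROM A")
    println(result.rows.size)
    pool.disconnect().get()
}
```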

chancekim · Sep 19 '22

Looking at some answers here, it might be caused by a schema change. Are you doing anything like that?

In that case you might want to evict all connections to avoid stale prepared statement caches.

oshai · Oct 11 '22

Another option is that this is caused by prepared statement caching. If that is the case, use this method:

fun sendPreparedStatement(query: String, values: List<Any?>, release: Boolean): CompletableFuture<QueryResult>

with release = false
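
For illustration, a call to that overload might look like the sketch below; the table, query, and surrounding setup are placeholders, and the release value simply mirrors the suggestion in this comment:

```kotlin
import com.github.jasync.sql.db.Connection
import com.github.jasync.sql.db.QueryResult

// Hypothetical helper showing the three-argument overload; "connection" is assumed
// to be an already-connected jasync Connection, and the query is illustrative.
fun selectById(connection: Connection, id: Int): QueryResult =
    connection
        .sendPreparedStatement("SELECT * FROM A WHERE id = ?", listOf(id), release = false)
        .get()
```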

oshai · Dec 12 '22

Looking at the issue again, I think the best solution will be to support "soft eviction". I opened #371 for it.

oshai · Jan 20 '23

@oshai Yes, I changed a table schema and it caused this problem. I found out that the connection is automatically evicted (or re-cached?) when this error happens. So I'm handling this issue by retrying the same query n times (n being the size of the connection pool).
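
A sketch of that retry approach, assuming the suspending API mentioned earlier and that it surfaces the driver's GenericDatabaseException directly; the query is a placeholder, and for brevity this retries on any database error rather than checking for SQLSTATE 0A000 specifically:

```kotlin
import com.github.jasync.sql.db.QueryResult
import com.github.jasync.sql.db.SuspendingConnection
import com.github.jasync.sql.db.postgresql.exceptions.GenericDatabaseException

// Retry the same prepared statement up to poolSize times, so every pooled
// connection gets a chance to hit the error once and refresh its cached plan.
suspend fun queryWithRetry(
    pool: SuspendingConnection,
    query: String,
    poolSize: Int
): QueryResult {
    var lastError: GenericDatabaseException? = null
    repeat(poolSize) {
        try {
            return pool.sendPreparedStatement(query)
        } catch (e: GenericDatabaseException) {
            lastError = e
        }
    }
    throw lastError ?: IllegalStateException("poolSize must be at least 1")
}
```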

chancekim · Jan 24 '23

Soft eviction is now possible, so closing this issue.

oshai · Aug 11 '23