R2DBC connection pool returns multiple pooled objects pointing to the same database connection
r2dbc-pool returns multiple pooled objects pointing to the same database connection when the downstream gets disposed (see the sketch after this list for how these scenarios can arise), e.g.:

- Downstream `Mono` converted from a `Flux` or other `Publisher` (e.g. any RxJava type)
- Dispose called from a different thread, e.g. `Flux.onError`, `Mono.onError` invoked from a timeout operator on a different scheduler
- Number of results is limited via `Flux.take`
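For illustration, here is a minimal sketch (not the exact reproducer from the linked PR) of how the cancellation scenarios above can show up in application code. The H2 URL, the `users` table, and the pool settings are assumptions for the example:

```java
import java.time.Duration;

import io.r2dbc.pool.ConnectionPool;
import io.r2dbc.pool.ConnectionPoolConfiguration;
import io.r2dbc.spi.Connection;
import io.r2dbc.spi.ConnectionFactories;
import io.r2dbc.spi.ConnectionFactory;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

class CancellationScenarios {

    // usingWhen releases the connection on complete, error and cancel; on
    // cancel the cleanup is subscribed to asynchronously, which is the window
    // in which the pool can hand the same connection to another caller.
    static Flux<String> names(ConnectionPool pool) {
        return Flux.usingWhen(
                pool.create(),
                conn -> Flux.from(conn.createStatement("SELECT name FROM users").execute())
                        .flatMap(result -> result.map((row, meta) -> row.get("name", String.class))),
                Connection::close);
    }

    public static void main(String[] args) {
        ConnectionFactory factory = ConnectionFactories.get("r2dbc:h2:mem:///test");
        ConnectionPool pool = new ConnectionPool(
                ConnectionPoolConfiguration.builder(factory).maxSize(1).build());

        // Scenario: number of results limited via take(1) -> upstream cancellation.
        names(pool).take(1).blockLast();

        // Scenario: Flux converted to a Mono and disposed by a timeout that
        // fires on a different scheduler.
        names(pool).next().timeout(Duration.ofMillis(50))
                .onErrorResume(e -> Mono.empty())
                .block();

        pool.dispose();
    }
}
```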
A dispose event in the downstream causes `ref.release().subscribe();` to be invoked, which releases the pooled reference without calling preDestroy or emitting any kind of logging event. As a result, another pending `ConnectionPool#create` invocation receives the same database connection, which has not yet been closed and has not finished processing in the initial subscription.
Under higher load this causes connection sharing between subscribers, leading to lost cursors and data consistency issues (when transactions are leaked).
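To make the mechanism concrete, here is a minimal, self-contained model of the race using plain Reactor. This is not r2dbc-pool internals; the queue-backed "pool" and all names are illustrative. A cancel from the downstream returns the object fire-and-forget, so a second acquisition can observe the very same object:

```java
import java.time.Duration;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

class ReleaseOnDisposeRace {

    static final BlockingQueue<String> IDLE = new LinkedBlockingQueue<>();
    static { IDLE.offer("conn-1"); }                 // a "pool" with a single connection

    static Mono<String> acquire() {
        return Mono.fromCallable(IDLE::take).subscribeOn(Schedulers.boundedElastic());
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(1);

        acquire()
                .flatMapMany(conn -> Flux.interval(Duration.ofMillis(5)).map(i -> conn)
                        // Models ref.release().subscribe(): on cancel the object
                        // goes straight back to the idle queue, with no preDestroy
                        // and no waiting for in-flight processing to finish.
                        .doOnCancel(() -> IDLE.offer(conn)))
                .take(1)                             // downstream disposes early
                .subscribe(first ->
                        // A second acquisition races with the first subscription's
                        // teardown and receives the very same object.
                        acquire().subscribe(second -> {
                            System.out.println("first=" + first + ", second=" + second);
                            done.countDown();
                        }));

        done.await();                                // prints conn-1 twice
    }
}
```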
Please see https://github.com/r2dbc/r2dbc-pool/pull/210 for a unit test reproducing the issue and a proposed solution. In case a more foolproof solution is required, a connection dispose hook can be added on `PooledConnection` via `Cleaner`.
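For reference, a hedged sketch of what such a `Cleaner`-based safety net could look like. All names here are illustrative, this is not the PR's actual implementation, and closing the delegate stands in for returning it to the pool:

```java
import java.lang.ref.Cleaner;
import java.util.concurrent.atomic.AtomicBoolean;

import io.r2dbc.spi.Connection;
import reactor.core.publisher.Mono;

// Illustrative wrapper: if a pooled connection wrapper becomes unreachable
// without ever being released, the Cleaner closes the underlying connection
// instead of letting it leak back into circulation.
final class GuardedPooledConnection {

    private static final Cleaner CLEANER = Cleaner.create();

    // The state object must not reference the wrapper, otherwise the wrapper
    // never becomes phantom-reachable and the cleaning action never runs.
    private static final class State implements Runnable {
        final Connection delegate;
        final AtomicBoolean released = new AtomicBoolean();

        State(Connection delegate) {
            this.delegate = delegate;
        }

        @Override
        public void run() {
            // Runs on the Cleaner thread only if release() was never called.
            if (released.compareAndSet(false, true)) {
                Mono.from(delegate.close()).subscribe();
            }
        }
    }

    private final State state;
    private final Cleaner.Cleanable cleanable;

    GuardedPooledConnection(Connection delegate) {
        this.state = new State(delegate);
        this.cleanable = CLEANER.register(this, state);
    }

    /** Normal release path: detach the safety net and close exactly once. */
    Mono<Void> release() {
        return Mono.defer(() -> {
            if (state.released.compareAndSet(false, true)) {
                cleanable.clean(); // unregisters; run() is now a no-op
                return Mono.from(state.delegate.close());
            }
            return Mono.empty();
        });
    }
}
```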
This looks like a really nasty bug, and no attention from the maintainers since May? 🤯
Is it possible to ask @mp911de to comment on this? Is it considered an issue, critical or non-critical, and are there any plans for it? Thanks a lot in advance 🙏
The overall issue is that Broadcom has almost halved the team that was looking into R2DBC. We have no capacity to look into projects other than our core duties. This is a really depressing situation for me, as I spent a lot of time on R2DBC to bring it into a proper state. It is painful to see how things fall apart and that I cannot spend time here.
That being said, any help from the community is greatly appreciated, especially since these topics aren't simple. It requires a lot of time to mentally get into the problems and analyze what is going on before a fix can be made.
Thanks Mark! Really sad that this is the reality... Stay strong! 🙏🙏
Any chance of getting this fixed? :(